(Roughly) Daily

Posts Tagged ‘intelligence’

“Don’t look away. Look straight at everything. Look it all in the eye, good and bad.”*…

Frustrated by fragmented war news, Elie Habib built World Monitor, a platform that fuses global data– everything from military activity and intel hotspots to climate events and cyber threats– to track trouble as it unfolds. Lilian Wagdy reports…

Elie Habib doesn’t work in the defense or intelligence industries. Instead, he runs Anghami, one of the Middle East’s largest music streaming platforms. But as missiles began flying across the region, a side project he coded earlier this year suddenly became something bigger: an open-source dashboard people around the world were using to track the war in real time…

… The idea emerged as headlines began colliding in ways that felt impossible to follow. “The news became genuinely hard to parse,” he says. “Iran, Trump’s decisions, financial markets, critical minerals, tensions compounding from every direction simultaneously.”

Traditional media wasn’t solving the problem he had in mind. “I didn’t need a news aggregator,” he says. “I needed something that showed me how these events connect to each other in real time. The existing OSINT tools that did this cost governments and large enterprises tens of thousands of dollars annually.”…

… The platform processes a messy stream of global data, bypassing social media noise to pull facts directly from the source.

“The system ingests 100-plus data streams simultaneously,” Habib notes. The result is a constantly updating map of global tensions: conflict zones with escalation scores, military aircraft broadcasting positions through ADS-B transponders, ship movements tracked through AIS signals, nuclear installations, submarine cables, internet outages and satellite fire detections.

“Everything is normalized, geolocated and rendered on a WebGL globe capable of displaying thousands of markers without frame drops,” Habib says.
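World Monitor’s internal schema isn’t described in the article, but the “normalized, geolocated” pipeline Habib sketches is easy to picture. Below is a minimal illustration in Python; the field names, the made-up ADS-B message format, and the to_marker helper are all hypothetical, for illustration only:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    """One normalized, geolocated record, whatever the source feed."""
    source: str        # e.g. "adsb", "ais", "firms", "news"
    kind: str          # e.g. "aircraft", "vessel", "fire", "report"
    lat: float
    lon: float
    time: datetime
    confidence: float  # 0.0-1.0, set from the source tier
    payload: dict      # raw fields retained for the detail view

def normalize_adsb(msg: dict) -> Event:
    """Map one (hypothetical) ADS-B position message into the common schema."""
    return Event(
        source="adsb",
        kind="aircraft",
        lat=msg["lat"],
        lon=msg["lon"],
        time=datetime.fromtimestamp(msg["seen_at"], tz=timezone.utc),
        confidence=0.9,   # transponder positions: high confidence
        payload={"icao": msg["icao"], "callsign": msg.get("callsign", "").strip()},
    )

def to_marker(event: Event) -> dict:
    """Reduce an Event to the minimal dict a WebGL layer needs to draw it."""
    return {"position": [event.lon, event.lat],
            "kind": event.kind,
            "confidence": event.confidence}
```

In a design like this, each feed would get its own small adapter like normalize_adsb; everything downstream (scoring, convergence checks, the globe renderer) would see only Events.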

The underlying architecture wasn’t built from scratch. Much of it draws on the same principles used to process massive volumes of streaming data…

… Processing hundreds of live data streams during a military conflict raises a question: How do you verify information fast enough to keep the system moving?

Habib’s answer was to remove human editors entirely. “Zero editorializing,” he says. “No human editor makes a call.”

Instead, Habib says the platform relies on a strict source hierarchy. Wire services and official channels such as Reuters, AP, the Pentagon and the UN sit at the top tier. Major broadcasters including the BBC and Al Jazeera follow, along with specialist investigative outlets such as Bellingcat. In total, he says the system processes about 190 sources, assigning higher confidence scores to more reliable ones.

Software then scans incoming reports for major events and emerging patterns. If multiple credible sources report the same development within minutes, the system flags it as a breaking alert. But headlines alone are not enough.
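The article gives only the outline of that corroboration rule, but a minimal sketch is easy to imagine: weight each source by its tier, then flag a clustered story once enough independent, credible sources report it within a short window. The weights, the ten-minute window, and the alert threshold below are assumptions for illustration, not Habib’s actual parameters:

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical tier weights: wires and official channels, then major
# broadcasters and vetted specialist outlets, then everything else.
SOURCE_WEIGHT = defaultdict(lambda: 0.3, {
    "reuters": 1.0, "ap": 1.0, "pentagon": 1.0, "un": 1.0,
    "bbc": 0.8, "aljazeera": 0.8, "bellingcat": 0.8,
})

WINDOW = timedelta(minutes=10)   # assumed corroboration window
ALERT_THRESHOLD = 1.8            # assumed: e.g. two wires, or a wire plus a broadcaster

def is_breaking(reports):
    """reports: list of (source_id, timestamp) already clustered to one story.

    Flag the story if enough independent, credible sources report it
    within the corroboration window of one another.
    """
    reports = sorted(reports, key=lambda r: r[1])
    for i, (_, t0) in enumerate(reports):
        sources_in_window = {src for src, t in reports[i:] if t - t0 <= WINDOW}
        if sum(SOURCE_WEIGHT[src] for src in sources_in_window) >= ALERT_THRESHOLD:
            return True
    return False
```

Under these assumed numbers, a single tier-one wire report would not trigger an alert on its own, but Reuters plus the BBC inside ten minutes would.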

Because online claims can be unreliable, the platform also looks for physical signals on the ground. It tracks disruptions such as internet blackouts, diverted military flights, halted cargo ships and satellite-detected fires. “A convergence algorithm then checks how many distinct signal types activate in the same geography simultaneously,” Habib says.

“One signal is noise. Three or four converging in the same location is the signal worth surfacing,” Habib says. If an internet outage coincides with diverted aircraft and a satellite heat signature in the same area, the map flags a potential escalation.
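Only the outline of the convergence algorithm is public, but the core idea (count how many distinct signal types land in the same place at roughly the same time) can be sketched in a few lines. The grid size, look-back window, and threshold here are assumptions for illustration:

```python
from collections import defaultdict
from datetime import timedelta

CELL_DEG = 0.5                 # assumed grid cell of roughly 50 km
WINDOW = timedelta(hours=6)    # assumed look-back window
MIN_TYPES = 3                  # "three or four converging ... is the signal"

def geo_cell(lat, lon):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

def converging_cells(signals, now):
    """signals: list of (signal_type, lat, lon, timestamp), e.g.
    ("internet_outage", ...), ("diverted_flight", ...), ("satellite_fire", ...).

    Return the cells where at least MIN_TYPES distinct signal types
    appeared within the window.
    """
    types_by_cell = defaultdict(set)
    for sig_type, lat, lon, t in signals:
        if now - t <= WINDOW:
            types_by_cell[geo_cell(lat, lon)].add(sig_type)
    return {cell: types for cell, types in types_by_cell.items()
            if len(types) >= MIN_TYPES}
```

A cell with a single signal type is treated as noise; any cell returned here would be a candidate for the map’s escalation flag.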

Habib acknowledges that removing humans from the loop carries risks. “The multi-tier source-credibility system and convergence algorithm [are a] substitute for editorial judgment,” he says. “Whether that creates blind spots in genuinely novel scenarios, an event with no historical baseline, is a real architectural question the system doesn’t fully resolve.”…

… Habib does not plan for the platform to become a business. “World Monitor started as a personal learning project,” he says. But the experiment quickly grew beyond that. Developers from around the world began contributing code and ideas, helping expand the system’s capabilities.

Now the project is shifting toward a broader goal. “The direction shifts from pure conflict tracking toward broader world signal understanding and acting on these signals,” Habib says.

Instead of simply mapping events after they happen, the platform is increasingly designed to detect patterns before they become headlines, Habib says. “The architecture is moving toward predicting where signals converge before events become news.”…

More at: “How a Music Streaming CEO Built an Open-Source Global Threat Map in His Spare Time,” from @lilianwagdy81.bsky.social and @wired.com. (JIC of paywall trouble, here is an archived link.)

Then, try out World Monitor yourself.

* Henry Miller

###

As we pay attention, we might recall that today is the anniversary of an event that would surely have made World Monitor’s map had that tool been around back then: on this date in 2011, the Tōhoku earthquake, a 9.0–9.1 undersea megathrust earthquake, occurred in the Pacific Ocean, 45 miles east of the Oshika Peninsula of the Tōhoku region. It was the most powerful earthquake ever recorded in Japan, and the fourth most powerful earthquake recorded in the world since modern seismography began in 1900.

The quake was followed by a tsunami that killed thousands and devastated whole cities. Together, they caused damage estimated to have run well into the hundreds of billions of US dollars. Famously, the quake and subsequent tsunami caused the shutdown of eleven nuclear reactors in power plants in the region. The Fukushima reactors were especially heavily damaged and leaked radioactive waste water, leading to radiation levels outside the plant that were up to eight times normal levels.

Mechanism of 2011 Tōhoku earthquake (source)

“I’ve been discovering, much to my dismay, that I’m not a criminal mastermind or anything. I’m just brute force and my powers in no way include super-intelligence, which kind of pisses me off.”*…

A young boy with short hair, wearing a collared shirt, is intently reading a book with a focused expression in a dimly lit setting.

How do we accommodate ourselves to the prospect of an intelligence far greater than our own? In a consideration of J.D. Beresford’s The Hampdenshire Wonder (the first recognized appearance of the concept in modern English-language literature), Ted Chiang unspools the intellectual and cultural history of this now-prevalent trope…

J.D. Beresford’s The Hampdenshire Wonder is generally considered to be the first fictional treatment of superhuman intelligence, or “superintelligence.” This is a familiar trope for readers of science fiction today, but when the novel was originally published in 1911 it was anything but. What intellectual soil needed to be tilled before this idea could sprout?

At least since Plato, Western thought has clung to the idea of a Great Chain of Being, also known as the scala naturae, a system of classification in which plants rank below animals; humans rank above animals but below angels; and angels rank above humans but below God. There was no implied movement to this hierarchy; no one expected that plants would turn into animals given enough time, or that humans would turn into angels.

But by the 1800s, naturalists like Lamarck were questioning the assumption that species were immutable; they suggested that over time organisms actually grew more complex, with the human species as the pinnacle of the process. Darwin brought these speculations into public consciousness in 1859 with On the Origin of Species, and while he emphasized that evolution branches in many directions without any predetermined goal in mind, most people came to think of evolution as a linear progression.

Only then, I think, was it possible to conceive of humanity as a point on a line that could keep extending, to imagine something that would be more than human without being supernatural.

Darwin’s half-cousin, Francis Galton, was the first to suggest the idea that mental attributes like intelligence could be quantified. Galton published a volume called Hereditary Genius in 1869, and during the 1880s and ’90s he measured people’s reaction times as a way of gauging their mental ability, pioneering what we now call the field of psychometrics. By 1905, Alfred Binet had introduced a questionnaire to measure children’s intelligence; such questionnaires would evolve into IQ tests. The validity of psychometrics is quite controversial nowadays, as people disagree about what “intelligence” means and to what extent it can be measured. Some modern cognitive scientists do not consider the term intelligence particularly useful, instead preferring to use more specific terms like executive function, attentional control, or theory of mind. In the future “intelligence” may be regarded as a historical curiosity, like phlogiston, but until we develop a more precise vocabulary, we continue to use the term. Our contemporary notion of intelligence first gained currency around the time that Beresford was writing, and one can see how that converged with the idea of the superhuman in The Hampdenshire Wonder.

The titular character of The Hampdenshire Wonder is a boy named Victor Stott…

… Victor is born with an enormous head but an ordinary body, which disappoints his athletic father but also points to certain assumptions we have about the relationship between the mental and the physical. Beresford could have made Victor both an athlete and a genius, but he opted instead to follow a trope perhaps originated by Wells: the idea that evolution is pushing humanity toward a giant-brained phenotype, which is itself implicitly premised on the idea that mental ability and physical ability are in opposition to one another. This has remained a common trope in science fiction, although there are occasional depictions of mental and physical ability going hand in hand…

[Chiang traces the development of the “superintelligence,” the problems it raises, and the ways that they are treated in The Hampdenshire Wonder and elsewhere– “whatever your wisdom, you have to live in a world of comparative ignorance, a world which cannot appreciate you, but which can and will fall back upon the compelling power of the savage—the resort to physical, brute force.”…]

… In 1993 [Vernor] Vinge [here] argued that progress in computer technology would inevitably lead to a machine form of superintelligence. He proposed the term “the singularity” to describe the date—in the next few decades—beyond which events would be impossible to imagine. Since then, the technological singularity has largely replaced biological superintelligence as a trope in science fiction. More than that, it has become a trope in the Silicon Valley tech industry, giving rise to a discourse that is positively eschatological in tone. Superintelligence lies on the other side of a conceptual event horizon. When considered as a purely fictional idea, it imposes a limit on the kind of narratives one can tell about it. But when you start imagining it as something that could exist in reality, it becomes an end to human narratives altogether.

The Hampdenshire Wonder does posit a kind of eschatological scenario, but of a completely different order. After Victor’s downfall, Challis recounts the conclusion he came to after a conversation he’d had with the child, revealing a profound terror about the finiteness of knowledge:

Don’t you see that ignorance is the means of our intellectual pleasure? It is the solving of the problem that brings enjoyment—the solved problem has no further interest. So when all is known, the stimulus for action ceases; when all is known there is quiescence, nothingness. Perfect knowledge implies the peace of death

… The idea that the search for understanding will inevitably lead to a kind of cognitive heat death is an interesting one. I don’t believe it and I doubt any scientist believes it, so it’s curious that Beresford—clearly an admirer of scientists—apparently did. Challis talks about the need for mysteries that elude explanation, which is a surprisingly anti-intellectual stance to find in a novel about superintelligence. While there is arguably a strain of anti-intellectualism in stories where superintelligent characters bring about their own downfall, those can just as easily be understood as warnings about hubris, a literary device employed as far back as the first recorded literature, “The Epic of Gilgamesh.” But The Hampdenshire Wonder, in its final pages, is making an altogether different claim: The pursuit of knowledge itself is ultimately self-defeating.

Nowadays we associate the word “prodigy” with precocious children, but in centuries past the word was used to describe anything monstrous. Victor Stott clearly qualifies as a prodigy in the modern sense, but he qualifies in the older sense too: Not only does he frighten the ignorant and superstitious, he induces a profound terror in the educated and intellectual. Seen in this light, the first novel about superintelligence is actually a work of horror SF, a cautionary tale about the dangers of knowing too much…

Superintelligence and its discontents, from @ted-chiang.bsky.social in @literaryhub.bsky.social.

Another powerful (and not unrelated) piece from Chiang: “Will A.I. Become the New McKinsey?”

* Kelly Thompson, The Girl Who Would Be King

###

As we wrestle with reason, we might wish a Joyeux Anniversaire to silk weaver Joseph Marie Jacquard; he was born on this date in 1752. Jacquard’s 1805 invention of the programmable power loom, controlled by a series of punched “instruction” cards and capable of weaving essentially any pattern, ignited a technological revolution in the textile industry… indeed, it set off a chain of revolutions: it inspired Charles Babbage in the design of his “Analytical Engine” (the ur-computer), and later, Herman Hollerith, who used punched cards in the “tabulator” that he created for the 1890 Census… and in so doing, pioneered the use of those cards for computer input… which is to say that Jacquard helped create the preconditions for AI (among all of the other things that computers can do).

Portrait of Joseph Marie Jacquard, a 19th-century inventor known for creating the programmable power loom.

source

“The brain has corridors surpassing / Material place…”*

A flock of starlings forms a complex murmurating pattern in the evening sky against a blue backdrop.

Our brains, Luiz Pessoa suggests, are much less like machines than they are like the murmurations of a flock of starlings or an orchestral symphony…

When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society.
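Pessoa’s point is conceptual, but the flocking example he opens with has a familiar computational counterpart: Reynolds-style “boids,” in which each agent follows only local rules (separation, alignment, and cohesion with nearby neighbours) and coordinated motion emerges with no central control. The toy model below is that classic simulation, not anything from the essay; all parameters are arbitrary:

```python
import numpy as np

N, NEIGHBORS, DT = 200, 7, 0.1

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, (N, 2))   # starting positions
vel = rng.normal(0.0, 0.1, (N, 2))     # starting velocities

def step(pos, vel):
    """One update: each 'bird' reacts only to its nearest neighbours."""
    # pairwise distances; ignore self when picking neighbours
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :NEIGHBORS]               # (N, NEIGHBORS)

    cohesion   = pos[nn].mean(axis=1) - pos                  # steer toward local centre
    alignment  = vel[nn].mean(axis=1) - vel                  # match local heading
    separation = (pos[:, None, :] - pos[nn]).mean(axis=1)    # back away from crowding

    vel = vel + 0.05 * cohesion + 0.05 * alignment + 0.10 * separation
    vel = vel / np.linalg.norm(vel, axis=1, keepdims=True)   # keep constant speed
    return pos + DT * vel, vel

for _ in range(500):
    pos, vel = step(pos, vel)
```

Run for a few hundred steps and the initially random cloud tends to settle into coherent group motion: no bird is in charge, yet the flock behaves as one.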

Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers.

These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works.

The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking…

Complexity, emergence, and consciousness: “The entangled brain” from @aeon.co. Read on for the provocative details.

* Emily Dickinson

###

As we think about thinking, we might send ambivalent birthday greetings to Robert Yerkes; he was born on this date in 1876. A psychologist, ethologist, and primatologist, he is best remembered as a principal developer of comparative (animal) psychology in the U.S. (his book The Dancing Mouse (1908) helped establish the use of mice and rats as standard subjects for experiments in psychology) and for his work in intelligence testing.

But in his later life, Yerkes began to broadcast his support for eugenics. These views, based on outmoded and incorrect racialist theories, are broadly considered specious by modern academics.

A black and white portrait of Robert Yerkes, an early 20th-century psychologist, wearing a suit and tie, with a neutral expression.

source

“Good librarians are natural intelligence operatives”*…

A historic black and white photograph of a library reference desk, featuring several librarians working among piles of books, tall windows in the background enhancing the ambiance.

The estimable Richard Ovenden (see here, here, here, and here) on Elyse Graham’s new Book and Dagger

At dinner parties, it has always been a struggle to get random people to be interested in my work as a librarian. Indeed, throughout my career, I have battled with stereotypes of my profession. We are often pigeonholed as being nerdy, rules-obsessed, tweed-wearing, bespectacled, and, above all, “dusty.” At least “nerd” has been transformed from negative to positive since the rise of digital technologies over the past few decades. Sometimes, with strangers, I have used the term “archivist” to describe what I do, but that hasn’t helped much.

So my heart rate soared—as would that of any librarian like me—at the idea suggested by the mere title of Book and Dagger that librarians and archivists could be involved in secret and dangerous tasks in a war, risking their lives and taking an active role in fighting against an evil tyrannous oppressor. Perhaps those tweeds are just camouflage.

During World War II, as shown in Elyse Graham’s new book Book and Dagger, librarians, archivists, and scholars played an unexpected and important role in the intelligence services of the United States (and to a lesser extent, of Great Britain). She writes with verve and pace, making this book an easy and enjoyable one to read. Best of all, Graham argues that the humanities—and those librarians and scholars that came from within the discipline—brought special expertise, experience, and attributes that were critical to the direction of strategy, the ultimate victory of the war, and the defense of democracy in the face of tyranny…

[Ovenden unpacks the turf covered…]

… Graham’s study is certainly heartwarming for any librarian, archivist, or humanities scholar seeking confirmation that the skills necessary for their day jobs are directly transferable to the defense of the realm and of democracy, gaining a utility beyond education, scholarship, and learning to that most visceral of tasks—the waging of war. Also heartwarming is the value which Graham’s account places on the infrastructure of the humanities—the libraries and archives themselves, and the sheer task of acquiring, managing, and preserving knowledge: buying books can keep us free!

Today, the humanities are in a funding crisis, and libraries and archives are being actively defunded by the state. Graham’s book is thus a timely reminder that the skills that are taught and honed in the humanities, in academic departments and in the libraries and archives that support humanistic study, are of vital importance not just to study the past. In fact, they are crucial to defend us in the present, so that we all might enjoy a secure and free future. That’s something I am willing to fight for…

The crucial roles played by unexpected combatants in World War II: “Secrets in the Stacks,” from @richove.bsky.social in @publicbooks.bsky.social.

(Image above: source)

* “Good librarians are natural intelligence operatives. They possess all of the skills and characteristics required for that work: curiosity, wide-ranging knowledge, good memories, organization and analytical aptitude, and discretion.” – Marilyn Johnson, This Book Is Overdue!: How Librarians and Cybrarians Can Save Us All

###

As we check it out, we might recall that it was on this date in 1953 that the first public television station in the U.S.– KUHT, operated in Houston, Texas by the University of Houston– began operation. It was the first station to broadcast under an educational non-profit license in the United States and one of the earliest member stations of National Educational Television (which was succeeded by PBS); it offered the university’s first televised college credit classes. Running 13 to 15 hours weekly, these telecasts accounted for 38 percent of the program schedule, mostly airing at night so that students who worked during the day could watch them. By the mid-1960s, with about one-third of the station’s schedule devoted to educational programming, more than 100,000 semester hours had been taught on KUHT.

Historical sign for KUHT Channel 8 in Houston, showcasing its early days as a public television station.

source

“It is the same in love as in war; a fortress that parleys is half taken”*…

The AT&T Long Lines Building, designed by John Carl Warnecke at 33 Thomas Street in Manhattan, under construction ca. 1974.

Further to yesterday’s post on historic battlements, Zach Mortice on a modern fortress that’s become a go-to location for film and television thrillers…

When it was completed in Lower Manhattan in 1974, 33 Thomas Street, formerly known as the AT&T Long Lines Building, was intended as the world’s largest facility for connecting long-distance telephone calls. Standing 532 feet — roughly equivalent to a 45-story building — it’s a mugshot for Brutalism, windowless and nearly featureless. Its only apertures are a series of ventilation hoods meant to hide microwave-satellite arrays, which communicate with ground-based relay stations and satellites in space. One of several long lines buildings designed by John Carl Warnecke for the New York Telephone Company, a subsidiary of AT&T, 33 Thomas Street is perhaps the most visually striking project in the architect’s long and influential career. Embodying postwar American economic and military hegemony, the tower broadcasts inscrutability and imperviousness. It was conceived, according to the architect, to be a “skyscraper inhabited by machines.”

“No windows or unprotected openings in its radiation-proof skin can be permitted,” reads a project brief prepared by Warnecke’s office; the building’s form and dimensions were shaped not by human needs for light and air, but by the logics of ventilation, cooling, and (not least) protection from atomic blast. “As such, the design project becomes the search for a 20th-century fortress, with spears and arrows replaced by protons and neutrons laying quiet siege to an army of machines within.” The purple prose of the project brief was perhaps inspired by the client. AT&T in the 1970s still held its telecom monopoly, and was an exuberant player in the Cold War military-industrial complex. Until 2009, 33 Thomas Street was a Verizon data center. And in 2016, The Intercept revealed that the building was functioning as a hub for the National Security Agency, which has bestowed upon it the Bond-film-esque moniker Titanpointe.

Computers at Titanpointe have monitored international phone calls, faxes and voice calls routed over the internet, and more, hoovering up data from the International Monetary Fund, the World Bank, and U.S. allies including France, Germany, and Japan. 33 Thomas Street, it turns out, is exactly what it looks like: an apocalypse-proof above-ground bunker intended not only to symbolize but to guarantee national security. For those overseeing fortress operations at the time of construction, objects of fear were nuclear-armed Communists abroad and a restive youth population at home, who couldn’t be trusted to obey the diktats of a culture that had raised up some in previously inconceivable affluence; an affluence built on the exploitation and disenfranchisement of people near and far.

By the time the NSA took over, targets were likely to be insurgents rejecting liberal democracy and American hegemony, from Islamic fundamentalists to world-market competitors in China, alongside a smattering of Black Lives Matter activists. For those outside the fortress, in the Nixon era as in the present, the fearful issue was an entrenched and unaccountable fusion of corporate and governmental capability, a power that flipped the switches connecting the world. At the same time, popular culture had begun, in the 1970s, to register a paranoia that has only intensified — the fear that people no longer call the shots. In its monumental implacability, Titanpointe seems to herald a posthuman regime, run by algorithm for the sole purpose of perpetuating its own system.

It is, in other words, a building tailor-made for spy movies.

John Carl Warnecke did not realize, of course, that he was storyboarding a movie set…

How (and why) a windowless telecommunications hub in New York City embodying an architecture of surveillance and paranoia became an ideal location for conspiracy thrillers: “Apocalypse-Proof,” from @zachmortice in @PlacesJournal. Fascinating.

* Margaret of Valois

###

As we ponder impenetrability, we might recall that it was on this date in 1780, during the American Revolutionary War, that Benedict Arnold, commander of the American fort at West Point, passed plans of the bastion to the British.

Portrait by Thomas Hart, 1776 (source)