(Roughly) Daily


“Nanotechnology is an idea that most people simply didn’t believe”*…

A person in a protective suit and gloves holds a microchip, showcasing nanotechnology in a cleanroom environment.

Indeed, in the 1980s, even as nanotech pioneer Erik Drexler, a graduate student at MIT at the time, was doing the early work of defining and charting a course for the nascent field, MIT’s department of electrical engineering and computer science refused to approve his Ph.D. topic and plan of study (though ultimately the Media Lab did, and Erik earned his doctorate).

Today the reality– and centrality– of the field are only too apparent and have become the subject of trade and industrial policy… because while the U.S. led in the development of nanotech science, it lags in manufacturing and commercialization. In an excerpt from their book Industrial Policy for the United States: Winning the Competition for Good Jobs and High-Value Industries, Ian Fletcher and Marc Fasteau explain…

Nanotechnology is the manipulation of matter at scales from a fraction of a nanometer to a few hundred nanometers — sizes between individual atoms and small single-celled organisms — at which it has radically different properties. Nanotech is already significant in many industries. Integrated circuits are a form of nanotech. Other nanotech provides the light, strong composites in aircraft and space vehicles. Still other nanotech powers the solid-state lasers used to transmit information through the internet and the light-emitting diodes in LED light bulbs and flat-screen TVs. Nanotech also makes possible solar cells, the batteries in electric cars, and medical technologies such as vaccines. It is thus the unifying thread of many of today’s most advanced technologies. Unfortunately, America is falling behind.

In the future, nanotech-based quantum computing and communications will lead to more powerful computers, transforming national security and internet commerce by making currently secret communications insecure. Medical nanotechnologies will permit targeted interventions at the cellular level, providing new weapons against diseases and biological weapons, as well as defenses against them. China is known to be working on these.

Much of the science underpinning these advances was developed at firms and universities in the US. But the huge manufacturing industries built on it are mostly overseas. For example, the organic light-emitting diode (OLED) technology Kodak created didn’t save that firm from going bankrupt in 2012. But it did enable lucrative businesses for Korea’s Samsung, to whom Kodak licensed the technology, and LG, which bought Kodak’s entire OLED business in 2009. Today, American firms like Nanosys and Universal Display develop important nanotechnologies, but do not actually manufacture the end products and are thus relatively small.

How did the US get itself into this situation? A major government program, the National Nanotechnology Initiative (NNI), has been funded since 2001, but Washington failed to appreciate the importance of having both a technology and a manufacturing strategy. The prevailing wisdom was that if the academic science was supported, mass manufacturing would follow automatically. By contrast, successful rival nations in nanotech have focused on making these technologies manufacturable at scale, employing every policy tool from R&D subsidies to cheap capital to tariffs. A 2020 National Academies review of the NNI urged that the US recognize that ‘the recent, focused, and in some cases novel commercialization approaches of other nations may be yielding better societal outcomes.’…

A little wonky, but both fascinating and important: “Nanotechnology,” via the invaluable Delanceyplace.com.

(Image above: source)

Ralph Merkle

###

As we get small, we might send minuscule birthday greetings to a man whose work has contributed to the development of medical applications of nanotech: Bert Sakmann; he was born on this date in 1942. A cell physiologist, he shared the Nobel Prize in Physiology or Medicine (with Erwin Neher) in 1991 for their work on “the function of single ion channels in cells”– work made possible in part by their invention of the patch clamp.

Black and white portrait of Bert Sakmann, a cell physiologist, wearing glasses and a dark sweater.

source

“Only connect!”*…

… Mobile phone companies are doing their best to oblige– and so far over half of the world’s population is connected to mobile internet. But as Khadija Alam and Russell Brandom report (in the indispensable Rest of World) growing that number is getting harder. (Read to the end for a twist)…

When Facebook hit 1 billion users in 2012, CEO Mark Zuckerberg said that when it comes to getting another billion users, “The big thing is obviously going to be mobile.” In an interview at the time, Zuckerberg told Bloomberg, “As more phones become smartphones, it’s just this massive opportunity.”

Clearly, he was correct. A recent survey from GSMA Intelligence, the research wing of the Global System for Mobile Communications Association (GSMA), a U.K.-based organization that represents mobile operators around the world, found that 4.6 billion people across the globe are now connected to mobile internet — or roughly 57% of the world’s population.

Now, the rate of new mobile internet subscriber growth is slowing. From 2015 to 2021, the survey consistently found over 200 million people coming online through mobile devices around the world each year. But in the last two years, that number has dropped to 160 million. Rest of World analysis of that data found that a number of developing countries are plateauing in the number of mobile internet subscribers. That suggests that in countries like Pakistan, Bangladesh, Nigeria, and Mexico, the easiest populations to get online have already logged on, and getting the rest of the population on mobile internet will continue to be a challenge. GSMA collects data by surveying a nationally representative sample of people in each country, and then it correlates the results with similar studies.
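The figures quoted above hang together arithmetically; here is an illustrative sanity check (the ~8.0 billion world-population figure is an assumption of this sketch, not a number from the article):

```python
# Illustrative arithmetic only: the 4.6B users and 200M-vs.-160M growth
# figures are quoted from the GSMA survey; the ~8.0B world population is an
# assumption used here for the check, not a number from the article.
WORLD_POP_BILLIONS = 8.0   # assumed world population
connected_billions = 4.6   # mobile internet users (quoted)

penetration = connected_billions / WORLD_POP_BILLIONS
print(f"mobile internet penetration ≈ {penetration:.0%}")  # ≈ 57%, as reported

# Growth slowdown: ~200M new users/year (2015-2021) vs. ~160M/year recently.
slowdown = (200 - 160) / 200
print(f"annual new-user growth is down ≈ {slowdown:.0%}")  # a 20% drop
```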

Max Cuvellier Giacomelli, the head of the Mobile for Development program at GSMA, said that large swaths of the world’s population still don’t have access to mobile internet primarily because of affordability. Although the cost of data has dropped radically in recent years, the International Telecommunication Union, a UN agency focused on information and communications technologies, notes that huge disparities between regions persist. The cost of data in Africa, for example, is more than twice that of the Americas, the second most expensive region…

… In countries including China, the U.S., and Singapore, a high share of the population is already connected to mobile internet — 80%, 81%, and 93%, respectively. So it’s no surprise that the rate of mobile internet subscriptions has slowed.

But the rate of new users has also slowed in countries including Bangladesh, Nigeria, and Pakistan — where only 37%, 34%, and 24% of the population currently use mobile internet.

Coverage continues to be a challenge, although data suggests that the issue is improving relatively quickly. Just 350 million people across the world, or 4% of the global population, still live in areas that are not covered by a mobile broadband network. According to GSMA, sub-Saharan Africa has the highest coverage gap of any global region. But between 2021 and 2023, mobile coverage in this area expanded from 83% to 87%.

Furthermore, recent advances in satellite technology have the potential to close this coverage gap by bringing mobile internet networks to rural or remote areas that lack mobile infrastructure. SpaceX’s Starlink, for example, is now available in over 100 countries and provides a roaming plan…

… Even in countries with high rates of mobile internet subscription, there are still stubborn pockets of people with no mobile internet access. In China, for example, 80% of the population has access to mobile internet. But subscription rates among the remaining 280 million people are slowing. Recent advances in satellite technology could bring mobile internet to new users in the country, especially in rural areas. In August, China began launching a satellite internet network [the Qianfan Constellation], set to rival SpaceX’s Starlink, in an effort to bring everyone online.

What happened to the “next billion” internet users? They’re already online: “New data shows the number of new mobile internet users is stalling,” from @khadijaalam_ and @russellbrandom in @restofworld.

Your correspondent finds himself pondering the final sentence in the piece: While the on-boarding of the unconnected 43% may be the result of a patchwork of local efforts, it’s clearly the goal of Starlink and the Qianfan Constellation to centralize connectivity… and the company– or government or culture– that controls the means of communication has a great deal of influence on what gets communicated and how. Nearly half the world’s population is in play, with all that that entails for geopolitics and geoeconomics; for example, see here (and the links therein)…

* E. M. Forster, Howards End

(R)D will be on its traditional Thanksgiving hiatus from today. Regular service will resume when we’re clear of Black Friday…

###

As we contemplate connectivity, we might recall that it was on this date in 1995 that Microsoft released Internet Explorer 2.0…

Nearly 6 months to the day after Bill Gates sent his Internet Tidal Wave memo recognizing the importance of the Internet, and only 3 months after releasing version 1.0, Microsoft releases Internet Explorer 2.0 for Windows 95 and Windows NT 3.5. IE 2.0 was still based on licensed code from Spyglass Mosaic, but was the first IE version to support now-common features such as SSL, JavaScript, and cookies. It was also the first version to allow the importing of bookmarks from Netscape Navigator, which at the time had a virtual monopoly on the web browser market. These were the first inklings of the “browser war” that would erupt over the next few years.

– source

Antonio Banderas homepage in 1995 (source)

“Engineering is the art of modeling materials we do not wholly understand, into shapes we cannot precisely analyze, so as to withstand forces we cannot properly assess, in such a way that the public has no reason to suspect”*…

… and so, for a very long time, it has been. Consider the case of the inventive Ismail al-Jazarī, a predecessor of Da Vinci…

… Al-Jazarī, who passed away in 1206, served as the chief engineer for the court of the Artuqids in Diyarbakir. His Book of Knowledge of Ingenious Mechanical Devices lives up to its name, detailing lock-like devices for raising water, sophisticated zodiac clocks, avian automata able to produce song, and a showering system for King Salih, who “disliked a servant or slave girl pouring water onto his hands for him”. He invented bloodletting technologies, mischievous fountains, segmental gears, and a chest (sundūq) that featured a security system with four combination dials — presumably a safe for storing valued possessions — and has been subsequently dubbed “the father of robotics”, due to his creation of a life-like butler who could offer guests a hand towel after their ablutions. Al-Jazarī’s contemporaries already recognized his eminence as an engineer, referring to him as unique and unrivaled, learned and worthy. He stood on the shoulders of Persian, Greek, Indian, and Chinese precursors, while Renaissance inventors, in turn, stood on his.

The Book of Knowledge of Ingenious Mechanical Devices contains some fifty mechanical devices divided into six categories: clocks; vessels and figures for drinking sessions; pitchers, basins, and other washing devices; fountains and perpetual flutes; machines for raising water; and a miscellaneous category, where we find a self-closing door. The second category is perhaps the most intriguing, and grants some insight into the extravagant concerns of al-Jazarī’s courtly patrons. One machine — “a standing slave holding a fish and a goblet from which he serves wine to the king” — is programmed to dispense clarified wine every eighth of an hour for a certain period. Numerous similar devices follow: robots that drink from goblets, which are filled from the recycled contents of their stomachs; automaton shaykhs that serve each other wine that each consumes in turn; a boat full of mechanical slave girls that play instruments during drinking parties. Not unlike our “AI assistants”, al-Jazarī’s inventions are never allowed to transcend the category of indentured laborer, reproducing the inequalities of social relations across the human-machine divide.

The illustrations from the Berlin manuscript are notably different from those in some of its sister specimens, such as the ornate pair of manuscripts held in Leiden. Here the images are mainly in-line illustrations and seem more focused on technical details and inner workings than other versions, which tend to lean toward aesthetic exteriors. Red and yellow predominate, offset by the occasional body of water in indigo blue. Gears and levers are rich in tone, while humanoid figures get left as simple, colorless sketches. To the contemporary viewer, the illustrations invert the power dynamic that is so present in al-Jazarī’s text. Machines come to the foreground; humans are incidental figures, almost irrelevant…

Putting material to work. More– and many more illustrations: “Ismail al-Jazarī’s Ingenious Mechanical Devices,” from @PublicDomainRev.

More of (and on) al-Jazarī’s creations here.

E. H. Brown

###

As we imagine machines, we might spare a thought for Henry Christopher Mance; he died on this date in 1926. An electrical engineer and inventor, he was instrumental in laying the earliest underwater telecom cables (under the Persian Gulf) and developed the Mance method of detecting and locating defects in submarine cables. But he is better remembered as the inventor of the Mance heliograph (a wireless solar telegraph that signals in Morse code by flashes of sunlight reflected from a mirror), which found wide application in military signaling, surveying, and forest protection, and for which he was knighted.
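As a rough illustration of how a heliograph turns text into flashes, here is a sketch mapping a message to Morse flash durations. The Morse table is the standard international one, but the timing units (1 for a dot, 3 for a dash) and the `to_flashes` helper follow general Morse convention rather than any documented detail of Mance’s instrument:

```python
# Sketch of heliograph-style signaling: text -> Morse -> flash durations.
# Timing units are the common Morse convention (dot = 1, dash = 3); gaps
# between symbols and letters, made by tilting the mirror away, are omitted.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def to_flashes(message: str) -> list[int]:
    """Return flash lengths in time units: 1 for a dot, 3 for a dash."""
    flashes = []
    for letter in message.upper():
        for symbol in MORSE.get(letter, ""):  # skip characters not in the table
            flashes.append(1 if symbol == "." else 3)
    return flashes

print(to_flashes("SOS"))  # [1, 1, 1, 3, 3, 3, 1, 1, 1]
```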

Signaling with a Mance heliograph, Alaska-Canada border, 1910 (source)
Sir Henry Christopher Mance (source)

“It is the same in love as in war; a fortress that parleys is half taken”*…

The AT&T Long Lines Building, designed by John Carl Warnecke at 33 Thomas Street in Manhattan, under construction ca. 1974.

Further to yesterday’s post on historic battlements, Zach Mortice on a modern fortress that’s become a go-to location for film and television thrillers…

When it was completed in Lower Manhattan in 1974, 33 Thomas Street, formerly known as the AT&T Long Lines Building, was intended as the world’s largest facility for connecting long-distance telephone calls. Standing 532 feet — roughly equivalent to a 45-story building — it’s a mugshot for Brutalism, windowless and nearly featureless. Its only apertures are a series of ventilation hoods meant to hide microwave-satellite arrays, which communicate with ground-based relay stations and satellites in space. One of several long lines buildings designed by John Carl Warnecke for the New York Telephone Company, a subsidiary of AT&T, 33 Thomas Street is perhaps the most visually striking project in the architect’s long and influential career. Embodying postwar American economic and military hegemony, the tower broadcasts inscrutability and imperviousness. It was conceived, according to the architect, to be a “skyscraper inhabited by machines.”

“No windows or unprotected openings in its radiation-proof skin can be permitted,” reads a project brief prepared by Warnecke’s office; the building’s form and dimensions were shaped not by human needs for light and air, but by the logics of ventilation, cooling, and (not least) protection from atomic blast. “As such, the design project becomes the search for a 20th-century fortress, with spears and arrows replaced by protons and neutrons laying quiet siege to an army of machines within.” The purple prose of the project brief was perhaps inspired by the client. AT&T in the 1970s still held its telecom monopoly, and was an exuberant player in the Cold War military-industrial complex. Until 2009, 33 Thomas Street was a Verizon data center. And in 2016, The Intercept revealed that the building was functioning as a hub for the National Security Agency, which has bestowed upon it the Bond-film-esque moniker Titanpointe.

Computers at Titanpointe have monitored international phone calls, faxes and voice calls routed over the internet, and more, hoovering up data from the International Monetary Fund, the World Bank, and U.S. allies including France, Germany, and Japan. 33 Thomas Street, it turns out, is exactly what it looks like: an apocalypse-proof above-ground bunker intended not only to symbolize but to guarantee national security. For those overseeing fortress operations at the time of construction, objects of fear were nuclear-armed Communists abroad and a restive youth population at home, who couldn’t be trusted to obey the diktats of a culture that had raised up some in previously inconceivable affluence; an affluence built on the exploitation and disenfranchisement of people near and far.

By the time the NSA took over, targets were likely to be insurgents rejecting liberal democracy and American hegemony, from Islamic fundamentalists to world-market competitors in China, alongside a smattering of Black Lives Matter activists. For those outside the fortress, in the Nixon era as in the present, the fearful issue was an entrenched and unaccountable fusion of corporate and governmental capability, a power that flipped the switches connecting the world. At the same time, popular culture had begun, in the 1970s, to register a paranoia that has only intensified — the fear that people no longer call the shots. In its monumental implacability, Titanpointe seems to herald a posthuman regime, run by algorithm for the sole purpose of perpetuating its own system.

It is, in other words, a building tailor-made for spy movies.

John Carl Warnecke did not realize, of course, that he was storyboarding a movie set…

How (and why) a windowless telecommunications hub in New York City embodying an architecture of surveillance and paranoia became an ideal location for conspiracy thrillers: “Apocalypse-Proof,” from @zachmortice in @PlacesJournal. Fascinating.

Margaret of Valois

###

As we ponder impenetrability, we might recall that it was on this date in 1780, during the American Revolutionary War, that Benedict Arnold, commander of the American fort at West Point, passed plans of the bastion to the British.

Portrait by Thomas Hart, 1776 (source)

“With my tongue in one cheek only, I’d suggest that had our palaeolithic ancestors discovered the peer-review dredger, we would be still sitting in caves”*…

As a format, “scholarly” scientific communications are slow, encourage hype, and are difficult to correct. Stuart Ritchie argues that a radical overhaul of publishing could make science better…

… Having been printed on paper since the very first scientific journal was inaugurated in 1665, the overwhelming majority of research is now submitted, reviewed and read online. During the pandemic, it was often devoured on social media, an essential part of the unfolding story of Covid-19. Hard copies of journals are increasingly viewed as curiosities – or not viewed at all.

But although the internet has transformed the way we read it, the overall system for how we publish science remains largely unchanged. We still have scientific papers; we still send them off to peer reviewers; we still have editors who give the ultimate thumbs up or down as to whether a paper is published in their journal.

This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce “better” results, and sometimes even commit fraud in order to impress those all-important gatekeepers. This drastically distorts our view of what really went on.

There are some possible fixes that change the way journals work. Maybe the decision to publish could be made based only on the methodology of a study, rather than on its results (this is already happening to a modest extent in a few journals). Maybe scientists could just publish all their research by default, and journals would curate, rather than decide, which results get out into the world. But maybe we could go a step further, and get rid of scientific papers altogether…

A bold proposal: “The big idea: should we get rid of the scientific paper?,” from @StuartJRitchie in @guardian.

Apposite (if only in its critical posture): “The Two Paper Rule.” See also “In what sense is the science of science a science?” for context.

Zygmunt Bauman

###

As we noodle on knowledge, we might recall that it was on this date in 1964 that AT&T connected the first Picturephone call (between Disneyland in California and the World’s Fair in New York). The device consisted of a telephone handset and a small, matching TV, which allowed telephone users to see each other in fuzzy video images as they carried on a conversation. It was commercially released shortly thereafter (prices ranged from $16 to $27 for a three-minute call between special booths AT&T set up in New York, Washington, and Chicago), but didn’t catch on.

source