(Roughly) Daily


“But if thought corrupts language, language can also corrupt thought”*…

Detail from the Constitution of India, 1949

Bulwer-Lytton had nothing on Indian jurists…

The English language arrived in India with the British colonists of the 17th century, giving rise to unique genres and variants, including some that characterize formal communications on the subcontinent to this day. Among these, the derogatory term “Babu English” was originally used by the British to describe the overwrought officialese of “babus,” or Indian bureaucrats– a style described at the British Library as “aspiring to poetic heights in vocabulary and learning, despite being full of errors.”

“Babu English is the much caricatured flowery language of… moderately educated clerks and others who are less proficient in formal English than they realise,” wrote Rajend Mesthrie in English in Language Shift (1993). His examples include the clerk who asked his employers for leave because “the hand that rocked the cradle has kicked the bucket”; the job applicant “bubbling with zeal and enthusiasm to serve as a research assistant”; and a baroque acknowledgement from a PhD thesis: “I consider it to be my primordial obligation to humbly offer my deepest sense of gratitude to my most revered Guruji and untiring and illustrious guide professor… for the magnitude of his benevolence and eternal guidance.”

The modern form of Babu English turns up most frequently in the language of India’s legal system. 

Take for example the 2008 case of 14-year-old Aarushi Talwar, who was killed, together with a housekeeper, Hemraj, in the Talwar family home in Delhi; the murder rocked the nation. In 2013, a trial court ruled that the victims had been murdered by the girl’s parents:

“The cynosure of judicial determination is the fluctuating fortunes of the dentist couple who have been arraigned for committing and secreting as also deracinating the evidence of commission of the murder of their own adolescent daughter—a beaut damsel and sole heiress Ms Aarushi and hapless domestic aide Hemraj who had migrated to India from neighbouring Nepal to eke out living and attended routinely to the chores of domestic drudgery at the house of their masters.”

Had the judge accidentally inhaled a thesaurus? With its tormented syntax and glut of polysyllabic words, the judgment is a clear descendant of Babu English and an example of its modern form. In May 2016, a landmark judgment on criminal defamation written by a future Chief Justice pushed in new stylistic directions, with phrases such as “proponements in oppugnation” and “made paraplegic on the mercurial stance.”

“It seems that some judges have unrealised literary dreams,” one former judge told me. “Maybe it’s a colonial hangover, or the feeling that obfuscation is a sign of merit… It can then become a 300-page judgment, just pontificating.”

Judges also retain a tendency to quote scripture, allude to legends and myths, and throw in a dash of Plato, Shakespeare, or Dickens. Some trace the legacy of flowery judgments to Justice Krishna Iyer, a pioneering and influential Supreme Court judge who served a seven-year term in the seventies. (“You had to perhaps sit with a dictionary to understand some [of his] judgments,” one lawyer remarked.)

But the former judge pointed out that this isn’t just a problem bedevilling judgments written in English. Even lower court judgments written in Hindi, he said, often deploy “words that were in vogue in Mughal times… It’s a problem of formalism.”

… 

“Wherefore, qua, bonum: decrypting Indian legalese”: a colonial hangover, or unrealized literary dreams? Mumbai-based @BhavyaDore explores.

* George Orwell, Politics and the English Language

###

As we choose our words carefully, we might send passionate birthday greetings to Mary Barbara Hamilton Cartland; she was born on this date in 1901. A paragon of prolixity, Barbara Cartland wrote biographies, plays, music, verse, drama, and operetta, as well as several health and cook books, and many magazine articles; but she is best remembered as a romance novelist, one of the most commercially successful authors worldwide of the 20th century.

Her 723 novels were translated into 38 languages, and she continues to be referenced in the Guinness World Records for the most novels published in a single year (1977). Estimates of her sales range from 750 million copies to over 2 billion.

Source

Written by (Roughly) Daily

July 9, 2021 at 1:00 am

“Doing research on the Web is like using a library assembled piecemeal by pack rats and vandalized nightly”*…

But surely, argues Jonathan Zittrain, it shouldn’t be that way…

Sixty years ago the futurist Arthur C. Clarke observed that any sufficiently advanced technology is indistinguishable from magic. The internet—how we both communicate with one another and together preserve the intellectual products of human civilization—fits Clarke’s observation well. In Steve Jobs’s words, “it just works,” as readily as clicking, tapping, or speaking. And every bit as much aligned with the vicissitudes of magic, when the internet doesn’t work, the reasons are typically so arcane that explanations for them are about as useful as trying to pick apart a failed spell.

Underpinning our vast and simple-seeming digital networks are technologies that, if they hadn’t already been invented, probably wouldn’t unfold the same way again. They are artifacts of a very particular circumstance, and it’s unlikely that in an alternate timeline they would have been designed the same way.

The internet’s distinct architecture arose from a distinct constraint and a distinct freedom: First, its academically minded designers didn’t have or expect to raise massive amounts of capital to build the network; and second, they didn’t want or expect to make money from their invention.

The internet’s framers thus had no money to simply roll out a uniform centralized network the way that, for example, FedEx metabolized a capital outlay of tens of millions of dollars to deploy liveried planes, trucks, people, and drop-off boxes, creating a single point-to-point delivery system. Instead, they settled on the equivalent of rules for how to bolt existing networks together.

Rather than a single centralized network modeled after the legacy telephone system, operated by a government or a few massive utilities, the internet was designed to allow any device anywhere to interoperate with any other device, allowing any provider able to bring whatever networking capacity it had to the growing party. And because the network’s creators did not mean to monetize, much less monopolize, any of it, the key was for desirable content to be provided naturally by the network’s users, some of whom would act as content producers or hosts, setting up watering holes for others to frequent.

Unlike on the briefly ascendant proprietary networks such as CompuServe, AOL, and Prodigy, content and network would be separated. Indeed, the internet had and has no main menu, no CEO, no public stock offering, no formal organization at all. There are only engineers who meet every so often to refine its suggested communications protocols that hardware and software makers, and network builders, are then free to take up as they please.

So the internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks. Tim Berners-Lee took up the invite and invented the protocols for the World Wide Web, an application to run on the internet. If your computer spoke “web” by running a browser, then it could speak with servers that also spoke web, naturally enough known as websites. Pages on sites could contain links to all sorts of things that would, by definition, be but a click away, and might in practice be found at servers anywhere else in the world, hosted by people or organizations not only not affiliated with the linking webpage, but entirely unaware of its existence. And webpages themselves might be assembled from multiple sources before they displayed as a single unit, facilitating the rise of ad networks that could be called on by websites to insert surveillance beacons and ads on the fly, as pages were pulled together at the moment someone sought to view them.
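
You can watch that multi-source assembly for yourself. Here’s a minimal Python sketch– standard library only, with example.com standing in for whatever page you like– that fetches a page and lists the distinct hosts its tags pull resources from. On an ad-supported page, expect far more origins than the one you typed:

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class ResourceCollector(HTMLParser):
        """Collect the origins a page pulls linked resources from."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.origins = set()

        def handle_starttag(self, tag, attrs):
            # Any tag with a src or href points at another resource,
            # possibly hosted by an entirely unrelated party.
            for name, value in attrs:
                if name in ("src", "href") and value:
                    host = urlparse(urljoin(self.base_url, value)).netloc
                    if host:
                        self.origins.add(host)

    url = "https://example.com/"  # placeholder; any page works
    page = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    collector = ResourceCollector(url)
    collector.feed(page)
    print(f"{url} references resources on {len(collector.origins)} origin(s):")
    for origin in sorted(collector.origins):
        print(" ", origin)

Every origin on that list is a separate party that must answer, correctly, each time the page is viewed– a hint of how many independent hands a single “seamless” page passes through.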

And like the internet’s own designers, Berners-Lee gave away his protocols to the world for free—enabling a design that omitted any form of centralized management or control, since there was no usage to track by a World Wide Web, Inc., for the purposes of billing. The web, like the internet, is a collective hallucination, a set of independent efforts united by common technological protocols to appear as a seamless, magical whole.

This absence of central control, or even easy central monitoring, has long been celebrated as an instrument of grassroots democracy and freedom. It’s not trivial to censor a network as organic and decentralized as the internet. But more recently, these features have been understood to facilitate vectors for individual harassment and societal destabilization, with no easy gating points through which to remove or label malicious work not under the umbrellas of the major social-media platforms, or to quickly identify their sources. While both assessments have power to them, they each gloss over a key feature of the distributed web and internet: Their designs naturally create gaps of responsibility for maintaining valuable content that others rely on. Links work seamlessly until they don’t. And as tangible counterparts to online work fade, these gaps represent actual holes in humanity’s knowledge…
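
That rot is easy to measure for yourself. A companion Python sketch– the URL list is a placeholder; substitute any bibliography or bookmark file– asks each link whether it still resolves:

    from urllib.error import HTTPError, URLError
    from urllib.request import Request, urlopen

    LINKS = [  # placeholders; point this at any list of citations
        "https://example.com/",
        "https://example.com/a-page-that-never-existed",
    ]

    def check(url, timeout=10.0):
        """Return a rough verdict for one link: ok, broken, or unreachable."""
        req = Request(url, method="HEAD",
                      headers={"User-Agent": "link-rot-survey/0.1"})
        try:
            with urlopen(req, timeout=timeout):
                return "ok"
        except HTTPError as err:   # the server answered, but with an error (404, 410, ...)
            return f"broken ({err.code})"
        except URLError:           # DNS failure, refused connection, timeout, ...
            return "unreachable"

    for url in LINKS:
        print(f"{check(url):>14}  {url}")

Point it at a few hundred citations from older documents and the gaps Zittrain describes stop being abstract: some fraction of the list simply no longer answers.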

The glue that holds humanity’s knowledge together is coming undone: “The Internet Is Rotting.” @zittrain explains what we can do to heal it.

(Your correspondent seconds his call to support the critically important work of The Internet Archive and the Harvard Library Innovation Lab, along with the other initiatives he outlines.)

* Roger Ebert

###

As we protect our past for the future, we might recall that it was on this date in 1937 that Hormel introduced Spam. It was the company’s attempt to increase sales of pork shoulder, not at the time a very popular cut. While there are numerous speculations as to the “meaning of the name” (from a contraction of “spiced ham” to “Scientifically Processed Animal Matter”), its true genesis is known to only a small circle of former Hormel Foods executives.

As a result of the difficulty of delivering fresh meat to the front during World War II, Spam became a ubiquitous part of the U.S. soldier’s diet. It was variously referred to as “ham that didn’t pass its physical,” “meatloaf without basic training,” and “Special Army Meat.” Over 150 million pounds of Spam were purchased by the military before the war’s end. During the war and the occupations that followed, Spam was introduced into Guam, Hawaii, Okinawa, the Philippines, and other islands in the Pacific. Immediately absorbed into native diets, it has become a unique part of the history and effects of U.S. influence in the Pacific islands.

source

“We can learn from history, but we can also deceive ourselves when we selectively take evidence from the past to justify what we have already made up our minds to do”*…

A logistical note to those readers who subscribe by email: Google is discontinuing the Feedburner email service that (Roughly) Daily has used since its inception; so email will now be going via Mailchimp. That should be relatively seamless– no re-subscription required– but there may be a day or two of duplicate emails, as I’m not sure how quickly changes take effect at Feedburner. If so, my apologies. For those who don’t get (Roughly) Daily in their inboxes but would like to, the sign-up box is to the right… it’s quick, painless, and can, if you change your mind, be terminated with a click. And now, to today’s business…

Historians across the country are criticizing Texas House Bill 2497—which, after Gov. Greg Abbott signed it on Monday, establishes the “Texas 1836 Project”—as yet another rhetorical volley in the culture wars, aimed at inflaming already-high tensions and asserting partisan political power. And they’re not wrong.

But as a historian, a Texas history professor, and a proud born-and-raised Texan, I applaud the new law’s call to “promote awareness” of the founders and founding documents of Texas. For teachers, this is an opportunity to read and analyze history with students. And speaking from my own experience, there’s one thing I can tell you: It’s not going to turn out how the politicians who applauded at the signing ceremony think it will.

H.B. 2497 mandates only two things. First, it calls for the creation of a nine-member advisory committee “to promote patriotic education” and Texas values. Second, it requires the committee to provide a pamphlet to the Texas Department of Public Safety, which will give an overview of Texas history and explain state policies that “promote liberty and freedom.” The DPS must distribute this pamphlet to everyone who receives a new Texas driver’s license. Another bill, H.B. 3979, which bars teachers from linking slavery or racism to the “true founding” or “authentic principles” of the United States, is now on Abbott’s desk.

The text of H.B. 2497 is itself relatively tame. It wants to promote history education—a cause that every history teacher would champion. But the context of the bill is much more troublesome. Abbott and much of the Republican-led Texas Legislature have joined a battalion of state leaders across the country who have declared war on ideas they believe aim to destroy society. They’ve identified two scapegoats: the New York Times’ 1619 Project and critical race theory, or CRT, a set of ideas coming from legal academia that is rarely directly taught in K–12 and college classrooms but has become a favorite dog whistle for the right. (If you’ve lost track of the many anti-CRT/1619 bills in play across the country, the situation is outlined in this New York Times piece from earlier this month.)

Enter the 1836 Project, and Greg Abbott’s rallying cry as he signed the bill: “Foundational principles” and “founding documents”! As a history professor, I say we take Abbott up on that challenge, especially the “documents” part. Time to start reading!

Let’s read Texas’ single most foundational document, the 1836 Constitution of the Republic of Texas. We will find several values familiar to present-day Texans: divided government, religious freedom, and the right to bear arms. But we will also find some “values” that don’t track very well in 2021. That it was illegal for either Congress or an individual to simply emancipate a slave. That even free Black people could not live in Texas without specific permission from the state. That “Africans, the descendants of Africans, and Indians” had no rights as citizens.

I know these historical documents are opportunities for education because I teach them all the time. Every semester that I teach Texas history at Southern Methodist University, we read these documents (and many more). And every semester, without fail, I have students respond in two ways: frustration and enlightenment. After reading the 1836 Texas Constitution’s enshrinement of racialized citizenship, they’re exasperated: “Why didn’t anyone teach us this before? I thought the Alamo was all about freedom.” When we read Texas’ reasons for secession in 1861, some can hardly believe it. They’ve always been taught Texans joined the Confederacy to defend their family or states’ rights, not because of an explicit devotion to maintaining a society based on racial subjugation.

I’m not an award-winning teacher. I don’t have any elaborate tricks up my sleeve, and I’ve never asked students to read any academic writing on CRT. And yet it’s my great joy every semester to watch students leave the class more aware of injustices, past and present. They’ve read for themselves. They’ve learned. They’ve changed.

So thank you, Greg Abbott. I know that you mean for the 1836 Project to manufacture a certain kind of citizen, one who joins you in the fight against the dark forces of CRT, 1619, and the very idea of “systemic racism.” But I can assure you—by insisting on a strategy that encourages teachers to read and discuss Texas primary sources, you have made a fatal error. You’ve charted a course to lead students of history to one destination, but the map will bring them straight to the places you’re trying to hide. Everything is right there in the documents, for everyone to see…

The “Texas 1836 Project” is a state-mandated effort to promote “Texas exceptionalism”– and counter CRT. But it may not work out as its Republican sponsors plan… A chance to teach Texas’ “founding documents”? This historian says, “Yes, please!” SMU professor Brian Franklin (@brfranklin4) explains why “The 1836 Project Is an Opportunity.”

For more background: “Texas’ 1836 Project aims to promote “patriotic education,” but critics worry it will gloss over state’s history of racism.”

[image above: source]

* Margaret MacMillan

###

As we listen for the backfire, we might recall that it was on this date in 1215 that King John affixed his seal to the Magna Carta…  an early example of unintended consequences:  the “Great Charter” was meant as a fundamentally reactionary treaty between the king and his barons, guaranteeing nobles’ feudal rights and assuring that the King would respect the Church and national law.  But over succeeding centuries, at the expense of royal and noble hegemony, it became a cornerstone of English democracy– and indeed, democracy as we know it in the West.

 source

Written by (Roughly) Daily

June 15, 2021 at 1:01 am

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”*…

There’s a depressing sort of symmetry in the fact that our modern paradigms of privacy were developed in response to the proliferation of photography and its exploitation by tabloids. The seminal 1890 Harvard Law Review article The Right to Privacy—which every essay about data privacy is contractually obligated to cite—argued that the right of an individual to object to the publication of photographs ought to be considered part of a general ‘right to be let alone’.

130 years on, privacy is still largely conceived of as an individual thing, wherein we get to make solo decisions about when we want to be left alone and when we’re comfortable being trespassed upon. This principle undergirds the notice-and-consent model of data management, which you might also know as the Pavlovian response of clicking “I agree” on any popup and login screen with little regard for the forty pages of legalese you might be agreeing to.

The thing is, the right to be left alone makes perfect sense when you’re managing information relationships between individuals, where there are generally pretty clear social norms around what constitutes a boundary violation. Reasonable people can and do disagree as to the level of privacy they expect, but if I invite you into my home and you snoop through my bedside table and read my diary, there isn’t much ambiguity about that being an invasion.

But in the age of ✨ networked computing ✨, this individual model of privacy just doesn’t scale anymore. There are too many exponentially intersecting relationships for any of us to keep in our head. It’s no longer just about what we tell a friend or the tax collector or even a journalist. It’s the digital footprint that we often unknowingly leave in our wake every time we interact with something online, and how all of those websites and apps and their shadowy partners talk to each other behind our backs. It’s the cameras in malls tracking our location and sometimes emotions, and it’s the license plate readers compiling a log of our movements.

At a time when governments and companies are increasingly investing in surveillance mechanisms under the guise of security and transparency, that scale is only going to keep growing. Our individual comfort about whether we are left alone is no longer the only, or even the most salient part of the story, and we need to think about privacy as a public good and a collective value.

I like thinking about privacy as being collective, because it feels like a more true reflection of the fact that our lives are made up of relationships, and information about our lives is social and contextual by nature. The fact that I have a sister also indicates that my sister has at least one sibling: me. If I took a DNA test through 23andme I’m not just disclosing information about me but also about everyone that I’m related to, none of whom are able to give consent. The privacy implications for familial DNA are pretty broad: this information might be used to sell or withhold products and services, expose family secrets, or implicate a future as-yet-unborn relative in a crime. I could email 23andme and ask them to delete my records, and they might eventually comply in a month or three. But my present and future relatives wouldn’t be able to do that, or even know that their privacy had been compromised at all.

Even with data that’s less fraught than our genome, our decisions about what we expose to the world have externalities for the people around us. I might think nothing of posting a photo of going out with my friends and mentioning the name of the bar, but I’ve just exposed our physical location to the internet. If one of my friends has had to deal with a stalker in their past, I could’ve put their physical safety at risk. Even if I’m careful to make the post friends-only, the people I trust are not the same as the people my friends trust. In an individual model of privacy, we are only as private as our least private friend.

Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me.

Data collection isn’t always bad, but it is always risky. Sometimes that’s due to shoddy design and programming or lazy security practices. But even the best engineers often fail to build risk-free systems, by the very nature of systems.

Systems are easier to attack than they are to defend. If you want to defend a system, you have to make sure every part of it is perfectly implemented to guard against any possible vulnerabilities. Oftentimes, trying to defend a system means adding additional components, which just ends up creating more potential weak points. Whereas if you want to attack, all you have to do is find the one weakness that the systems designer missed. (Or, to paraphrase the IRA, you only have to be lucky once.)

This is true of all systems, digital or analog, but the thing that makes computer systems particularly vulnerable is that the same weaknesses can be deployed across millions of devices, in our phones and laptops and watches and toasters and refrigerators and doorbells. When a vulnerability is discovered in one system, an entire class of devices around the world is instantly a potential target, but we still have to go fix them one by one.

This is how the Equifax data leak happened. Equifax used a piece of open source software that had a security flaw in it, the people who work on that software found it and fixed it, and instead of diligently updating their systems Equifax hit the snooze button for four months and let hackers steal hundreds of millions of customer records. And while Equifax is definitely guilty of aforementioned lazy security practices, this incident also illustrates how fragile computer systems are. From the moment this bug was discovered, every server in the world that ran that software was at risk.
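
The mechanics of that exposure are mundane enough to sketch. Here’s a toy Python illustration– the package names and version numbers are invented for the example– of the audit that patching amounts to: comparing what’s deployed against the versions in which known flaws were fixed.

    # Toy advisory data: for each package, the first version containing the fix.
    # Names and numbers are invented for illustration.
    ADVISORIES = {
        "webframework": (2, 3, 32),
        "xmlparser": (1, 9, 4),
    }

    # What this particular server happens to be running.
    INSTALLED = {
        "webframework": (2, 3, 5),   # behind the fix -> vulnerable
        "xmlparser": (1, 9, 7),      # at or past the fix -> fine
    }

    for pkg, fixed_in in ADVISORIES.items():
        have = INSTALLED.get(pkg)
        if have is None:
            continue  # not deployed here
        status = "VULNERABLE" if have < fixed_in else "patched"
        print(f"{pkg}: installed {'.'.join(map(str, have))}, "
              f"fixed in {'.'.join(map(str, fixed_in))} -> {status}")

The comparison was never the hard part; the hard part is that every one of millions of deployments has to run it, promptly, every time– and Equifax didn’t.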

What’s worse, in many cases people weren’t even aware that their data was stored with Equifax. If you’re an adult who has had a job or a phone bill or interacted with a bank in the last seven years, your identifying information is collected by Equifax whether you like it or not. The only way to opt out would have been to be among the small percentage of overwhelmingly young, poor, and racialized people who have no credit histories, which significantly limits the scope of their ability to participate in the economy. How do you notice-and-consent your way out of that?

There unfortunately isn’t one weird trick to save democracy, but that doesn’t mean there aren’t lessons we can learn from history to figure out how to protect privacy as a public good. The scale and ubiquity of computers may be unprecedented, but so is the scale of our collective knowledge…

Read the full piece (and you should) for Jenny Zhang’s (@phirephoenix) compelling case that we should treat– and protect– privacy as a public good, and her explanation of how we might do that: “Left alone, together.” TotH to Sentiers.

[image above: source]

* Edward Snowden

###

As we think about each other, we might recall that it was on this date in 1939 that the first government appropriation was made to support the construction of the Harvard Mark I computer.

Designer Howard Aiken had enlisted IBM as a partner in 1937; company chairman Thomas Watson Sr. personally approved the project and its funding. It was completed in 1944 (and put to work on a set of war-related tasks, including calculations– overseen by John von Neumann– for the Manhattan Project).

The Mark I was the industry’s largest electromechanical calculator… and it was large: 51 feet long, 8 feet high, and 2 feet deep; it weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot (15 m) drive shaft coupled to a 5 horsepower electric motor, which served as the main power source and system clock. It could do 3 additions or subtractions in a second; a multiplication took 6 seconds; a division took 15.3 seconds; and a logarithm or a trigonometric function took over a minute… ridiculously slow by today’s standards, but a huge advance in its time.

source

“We must be free not because we claim freedom, but because we practice it”*…

 


 

There is a growing sense of unease around algorithmic modes of governance (‘algocracies’) and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology…

From a pre-print of John Danaher’s (@JohnDanaher) chapter in the forthcoming Oxford Handbook on the Philosophy of Technology, edited by Shannon Vallor: “Freedom in an Age of Algocracy”… a little dense, but very useful.

[image above: source]

* William Faulkner

###

As we meet the new boss, same as the old boss, we might recall that it was on this date in 1962 that telephone and television signals were first relayed in space via the communications satellite Echo 1– basically a big metallic balloon that simply bounced radio signals off its surface.  Simple, but effective.

Inflating the sphere at ground-level atmospheric pressure would have required forty thousand pounds (18,144 kg) of air, so it was inflated in space; once in orbit, only several pounds of gas were needed to keep it inflated.

Fun fact: the Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

 

Written by (Roughly) Daily

February 24, 2020 at 1:01 am
