Digitally just cities

Neural network. Photo: Kevin Rheese, Licensed under Creative Commons 2.0.

In his seminal book The Myth of the Machine, Lewis Mumford wrote that emerging megatechnics would create a uniform, all-enveloping, super-planetary structure, designed for automatic operation, in which man would become a passive, purposeless, machine-conditioned animal. This was in 1967, before anyone could imagine the impact of digital technology.

Today, barely 50 years later, we speak of a digital society, ubiquitous connectivity and artificial intelligence (AI), still not knowing what the impact will be, nor how this technology can be controlled.

Aleksandra Mojsilović, co-director of IBM Science for Social Good, says that what keeps her awake at night is the speed at which AI is developing: Think about when cars were first invented: it was the Wild West. To this day, the rules for the development and application of AI are lagging behind.

Six digital technologies that stimulate process automation.

This article is about the impact of the latest digital technologies, the motives behind them, and the opportunities for society to mitigate their negative effects or to use them to become a more humane place. I mainly focus on artificial intelligence, a technology based on artificial neural networks that mimic the neural networks in our brain (header picture). I will explain how cities can protect their citizens' interests, not least through strict legislation on data collection and the reduction of cybercrime. The ultimate question is: if technology dehumanizes, is it possible to reverse this process?


Digitally just cities is part seven of a series of essays on how cities can become more humane. That means finding a balance between sustainability, social justice and quality of life. This requires far-reaching choices. Once these choices have been made, it goes without saying that we use smart technologies to achieve these goals.

The essays that have already been published can be found here.


Artificial intelligence

Simply put, artificial intelligence (AI), or autonomous and intelligent technical systems, is the ability of a computer to learn to recognize and correctly name patterns. Such a pattern may be a pedestrian crossing the road in the case of a self-driving car, or the face of a specific person in the case of facial recognition. Learning means that the computer itself 'discovers' the right distinctions instead of being programmed with them. The role of people in this process is twofold: first, writing an 'instruction' (algorithm), and second, training the computer to apply this instruction correctly. For example, if the computer in a self-driving car receives negative feedback because it confuses a bicycle with a motorcycle, the device will attempt to redefine the characteristics of both.
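To make this learning loop concrete, here is a minimal, purely illustrative sketch in Python: a toy classifier adjusts its 'characteristics' (weights) every time feedback shows that its guess for bicycle versus motorcycle was wrong. The feature values (weight in kilograms, typical speed) are invented for illustration and do not come from any real dataset or product.

```python
import numpy as np

# Toy training data: [weight in kg, typical speed in km/h]
# label 0 = bicycle, 1 = motorcycle (values invented for illustration)
X = np.array([[12, 20], [15, 25], [10, 18],
              [180, 90], [200, 110], [160, 80]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# Normalize the features so both contribute comparably
X = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(2)   # the 'characteristics' the machine adjusts
b = 0.0
lr = 0.1          # learning rate

for epoch in range(100):
    for xi, target in zip(X, y):
        pred = 1 / (1 + np.exp(-(xi @ w + b)))  # current guess, between 0 and 1
        error = target - pred                   # the 'negative feedback' when wrong
        w += lr * error * xi                    # redefine the characteristics
        b += lr * error

print("learned weights:", w, "bias:", b)
```

The point is not the arithmetic but the mechanism: no human spells out what distinguishes a bicycle from a motorcycle; the distinction emerges from repeated feedback on labeled examples.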

All technology companies understand that AI and related technologies are essential to secure their future position. As a result, they invest large amounts of money in research, often through the acquisition of start-ups.

Global mergers and acquisitions related to AI (up to December 4th, 2017). Source: The Economist.

AI is designed to reduce human interventions in planning, decision-making and the recognition of patterns in big data. In the meantime, concerns about negative consequences are growing, such as breaches of privacy, discrimination, loss of human skills, risks to security of critical infrastructure, and possible negative long-term effects on societal well-being. The benefit of these technologies depends on whether developers are able to align the operation of AI with ethical principles, such as fairness, environmental sustainability, and growth of self-determination.

Facial recognition

One of the most criticized applications of AI is facial recognition. Its hasty introduction, especially by the police and retail (recognition of shoplifters), is heavily disputed. Elsewhere, I have elaborated on facial recognition, mentioning its lack of accuracy, especially with regard to people of color and women. As a consequence, several cities in the US are banning facial recognition. Recently, Portland issued a total ban, both in the public sector, in particular the police, and in the private sector.

The distrust of facial recognition and the dubious accuracy of other applications of AI have led to initiatives to improve their design. The influential Institute of Electrical and Electronics Engineers (IEEE) took a worldwide initiative with the publication of a comprehensive manual, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. This is the most comprehensive, crowd-sourced treatise on the ethics of AI and machine learning in general. Building on this seminal work, IBM has compiled its own guide that focuses on five ethical issues:

Accountability
Designers of AI, and not only the companies or governments that invested in its development, are accountable for considering the system's impact in the world.

Value Alignment 
Designers, as a team, have to consider contextual factors such as past experiences, memories, upbringing, and cultural norms that are likely to influence the outcomes of the process to be automated, because AI itself has no acquaintance with these factors. The more diverse the development team, the better the results.

Explainability
Designers must be able to explain the considerations underlying algorithms in terms that people can understand, because this knowledge is the key to understanding the system's conclusions and recommendations.

Fairness
Developers of AI systems must be aware of the biases that unconsciously direct their thinking. As a team, they must minimize algorithmic bias through constant reflection on both the data and the algorithms.

Examples of unconscious bias. Source: IBM.

User Data Rights
Developers must educate users to take control over their data, in compliance with national and international laws, for instance the EU's General Data Protection Regulation (GDPR).


Algorithm manager New York

New York City has appointed an algorithm manager to check whether algorithms comply with ethical and legal rules regarding data use and privacy. The mayor took this decision on the advice of the Automated Decision Systems Task Force, which was founded in response to criticism of the way the New York Police Department applies facial recognition. In Amsterdam, at the request of the city administration, accountancy firm KPMG supervises the quality of algorithms by means of an artificial intelligence audit.


Surveillance capitalism

The main reason that companies like Amazon, Facebook and Google invest billions in AI is that they want to influence our purchasing behavior through targeted marketing, and for that they want to know everybody's shopping behavior in order to predict and shape customers' preferences. Traditional advertising has lost its impact, due to the large number of products and services and the enormous variety of individual preferences. Basically, targeted marketing means that a specific person is advised at the right time to buy a certain product in a particular store, that is, a store that has paid for this service.

A scientific report by Douglas C. Schmidt, professor of computer science at Vanderbilt University, reveals what data Google collects and how. Google knows the preferences of billions of people, knows where they are at any time, and continuously sends them personalized commercial messages. The next step is for potential customers to receive a text message when approaching a store that sells one of their favorite items. Even better, sellers can greet customers in person thanks to facial recognition and surprise them with an irresistible offer.

Personal data of an Android phone user collected by Google during one day. The gray pins represent location data collected while the phone was not actively used. Chart: Pamela Saxon (Vanderbilt University).

In the case of Amazon, customers are not even required to leave their homes at all. Thanks to AI, the company can predict at any time which products or services they are open to and subsequently make an irresistible offer. The reason may also be that the infamous Alexa 'overheard' them.


Sidewalk Labs Toronto

The development process of Quayside, a 12-acre chunk of brownfield land on the edge of downtown Toronto, shows the role information technology plays in the development of cities. Elsewhere, I have paid attention to the role of Sidewalk Labs (a sister company of Google) in this project. I was pleased by its progressive urbanism. But at the same time, the company wants to construct a 'digital layer' over Quayside. The larger the amount of data gained from residents and visitors, the better Sidewalk Labs will succeed in getting third parties interested in monetizing these data. The opposition to Sidewalk Labs' role in Toronto is growing fast.

Site vision: mixed land use and traffic. Artist impression: Sidewalk Labs (public domain).

Amazon has succeeded in becoming an all-American icon. Research has shown that Amazon is the second most-trusted institution of any kind in the United States, ahead of the government, the police, and the higher-education system. The U.S. military comes first. 

Because of its reputation, Amazon has a huge impact on purchasing behavior. For instance, it has successfully exploited the average American's fear of burglars by promoting Ring, a doorbell with a built-in camera. Amazon acquired the company that produces the Ring in 2018 for $839 million. In the US, millions of these safety systems have been sold.

The aforementioned Alexa, too, has become an inseparable part of many households and a source of large amounts of personalized information. The moment Alexa starts delivering commercial messages is not far away.

I started this article with a reference to Lewis Mumford's fear of the impact of megatechnics. Today, technologies that ensure that no action goes unnoticed, as Mumford predicted, are fully operational. The handful of gigantic technological conglomerates that dominate the digital world today show a shocking resemblance to Mumford's megamachine.

However, for many people these megamachines not only seem absolutely irresistible but also ultimately beneficial. Mumford believed that these two conditions feed into what he called the megatechnic bribe. The technology companies provide people with an impressive range of goods and services, often free of charge. This pleasantly masks the rampant surveillance, environmental destruction, exploitation, and concentration of wealth in a few hands that go with them.

The quest for regulation

Platforms such as Amazon and Alibaba not only offer third parties’ products and services via the Internet, they also develop and offer their own products and services. As owners of the network and sellers with the largest share in the market, they dominate all other users of the platform.

They have disastrous consequences for competition, the viability of urban centers, and customers' ecological footprint, and they concentrate monopolistic economic power and tremendous wealth in the hands of their owners and shareholders.

Facebook is another example of a platform. As a social network, it thrives on its near-monopolistic position, which gives it a huge advantage as an advertising medium.

Even in the US, the call for regulation is getting louder. In an article in the New York Times, Chris Hughes, co-founder of Facebook, pleads for breaking up his former brainchild, at least by reversing the acquisitions of Instagram and WhatsApp, something that presidential candidate Elizabeth Warren also seems to intend.

Hughes refers to the fact that America is built on the idea that power should not be concentrated in any one person, because of its fallibility. A century ago, Senator John Sherman said in Congress: If we will not endure a king as a political power, we should not endure a king over the production, transportation and sale of any of the necessities of life. If we would not submit to an emperor, we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.

From the 1980s onward, concentration in every economic sector increased to a previously unknown extent. Facebook is worth $500 billion and accounts for about 80% of the world's social networking revenue.

The power of Facebook compared to other social networks.

Legislation related to Facebook should include the protection of privacy and a transparent policy regarding acceptable speech, and Mark Zuckerberg seems to have no objection to this. In an op-ed essay in the Washington Post in March 2019, he wrote: Lawmakers often tell me we have too much power over speech, and I agree. He goes even further: government regulation should not just cover speech, but also privacy and interoperability, meaning the right of consumers to seamlessly leave one network and transfer their profiles, friend connections, photos and other data to another. His plea for legislation was echoed by the CEOs of Google, Microsoft and Amazon.

What they are all trying to prevent is a tightening of antitrust policy, but such a policy is exactly what is needed.


Perspective

Perspective is an API that calculates the probability that texts from websites or online forums are perceived as toxic. To develop the algorithm, huge data sets were used, for example the New York Times comments section. People were asked how the messages made them feel. Their answers went into a machine learning model that 'learned' to select messages that are most likely to be experienced as toxic. It is able to distinguish between expressions such as You fu… gay (toxic) and I'm proud to be gay (non-toxic).
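The following is not Perspective itself, but a minimal sketch of the approach the paragraph describes: comments labeled by people are used to train a model that returns a probability that a new message is perceived as toxic. The tiny training set and the example comments are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples of comments and how readers labeled them
comments = [
    "I'm proud to be gay",
    "Thanks for sharing, interesting point",
    "You are a disgusting idiot",
    "Nobody wants you here, get lost",
]
labels = [0, 0, 1, 1]   # 0 = perceived non-toxic, 1 = perceived toxic

# Turn text into word-frequency features and fit a simple classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# The model returns a probability rather than a verdict
print(model.predict_proba(["I'm proud of this community"])[0][1])
```

The real system works at a far larger scale, but the principle is the same: human judgments of toxicity become training data, and the output is a score, not a censorship decision.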


Basically, platforms work best if customers and retailers meet at a single (virtual) marketplace, or if users of a single social medium can connect with everybody, just as one electricity grid is preferable for many reasons. Such services can be considered natural monopolies. However, in the case of (near-)natural monopolies like Amazon, Facebook and Google, a separation is mandatory between the platform as such and the companies that use it to sell their products. This implies huge changes for the companies concerned:

The new Amazon
For Amazon, this means that the company manages the virtual marketplace and possibly offers storage and transport services. However, it would then be forbidden to sell products of its own. Every company can rent a place on the platform and use the data of its own customers, provided these customers agree.

The new Facebook
Adult Facebook users may be asked to pay a fair fee for networking, in exchange for their data not being collected and no advertisements being shown to them. Children's data should never be saved, nor should advertisements be displayed to them. Alternatively, Facebook can be allowed to collect data from adults and offer them (personalized) advertisements in exchange for free use of the network service.

The new Google
As mentioned earlier, I am pleased with the availability of Google's search engine, its maps, including Google Earth, and its email facilities. I even appreciate it if the company provides, on request, an overview of drugstores, plumbers and the like in my neighborhood. I am therefore willing to pay for these services, on the condition that the company refrains from collecting my personal data and showing me advertisements. Those who wish can allow the company to collect their data and show personalized advertisements in exchange for free use of its services.


Cities for Digital Rights Coalition

In November 2018, Amsterdam, Barcelona, and New York City launched the Cities for Digital Rights coalition. It stands for reliable and secure digital services and infrastructures. The coalition wants to support our communities through a five-point agenda of equal access to the internet, privacy and data protection, transparency and non-discriminatory algorithms, diversity and inclusion, and open and ethical digital service standards.

A short video documents the launch of the Cities for Digital Rights coalition.


The role of cities as regulators: Amsterdam as an example

In Europe, regulation is progressing: The European Union issued the far-reaching General Data Protection Regulation (GDPR). You can find a summary of its content here and an overview of its guiding principles below.

Local authorities can play an important role in maintaining net neutrality and open data standards, protecting digital rights, and combating cybercrime. This comes on top of their role in enabling fast Internet for all citizens, improving digital self-reliance and resilience, making digital services available, installing sensors to improve the quality of the environment, and empowering digital art and the creative industry, topics that are discussed in another essay.

Below, I will focus on the role of the city in the debate on AI and algorithms, the protection of the privacy of the citizens in general, the regulation of data ownership, the preference for open software and the struggle against cybercrime.

Each topic is illustrated with activities planned for 2019 by the City of Amsterdam, as mentioned in the publication A Digital City for and by Everyone: Agenda for the Digital City, version 1.0.

Supervision and debate on the role of artificial intelligence and algorithms
Cities already use AI to analyze vast amounts of data and to inform policy. Algorithms are implemented to streamline services, prioritize operations, and even predict when restaurant inspections, road work, or building permits will be needed.

It is of the utmost importance to realize that none of these activities take place without human intervention. Algorithms are designed, controlled and guided by humans, although few people can understand the relationship between this human check and the outcomes of data analysis. I already emphasized that the ability to explain this relationship must be part of AI developers' competences. It is useful for (public) organizations to set up an advisory board that oversees how these systems work.


The algorithms toolkit

To help reduce the biases in algorithms, the Center for Government Excellence at Johns Hopkins University recently released an algorithms toolkit for local government leaders.

Its main goal is to ensure that automated decisions are fair and that unintentional harm is minimized. The toolkit helps local leaders proactively ask specific questions to quantify risks, and it also provides recommendations on ways to deal with those risks.

Neural network. Source: The Algorithm Toolkit, Johns Hopkins University.

Some applications of AI are already being scrutinized. I already mentioned facial recognition, because of the regular occurrence of bias in its designers and datasets. Another example is fraud detection, also a legitimate government task, but here too transparency is needed.

Activities foreseen in the Amsterdam Agenda for the digital City

  • Encourage public debate on ethical questions with respect to AI.
  • Independent party to audit algorithms.

Transparency of data collection by government itself: privacy by design

Many adhere to the principle that in a free country people have the right to move around in the city, without being observed and registered except for law enforcement reasons.

In many cases, for instance crowd control at large-scale events, personal information is not needed, and here 'privacy by design' applies. One of its rules is that observations are kept to a minimum: for example, counting people from behind, or cars from above.
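As a minimal sketch of what such data minimization can look like in practice, the Python fragment below counts a crowd without retaining anything personal. The detect_people() routine is a hypothetical stand-in for a vision component that spots silhouettes from behind or above; it is an assumption for illustration, not a reference to any specific library or city system.

```python
from collections import defaultdict

def count_crowd(frames_with_timestamps, detect_people):
    """Return aggregate head counts per time slot; store nothing else."""
    counts = defaultdict(int)
    for timestamp, frame in frames_with_timestamps:
        # Count detections in this frame, then let the frame go out of scope:
        # no images, faces or individual trajectories are ever written to disk.
        counts[timestamp.strftime("%H:00")] += len(detect_people(frame))
    return dict(counts)   # e.g. {"14:00": 512, "15:00": 734}
```

The design choice is the point: the only output that leaves the system is an aggregate count per hour, which is enough for crowd control but useless for tracking individuals.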

Data collection policy involves seeking consent from citizens, maintaining checks and balances and, in the case of surveillance, establishing rules for those who collect and process information.

Activities foreseen in the Amsterdam Agenda for the digital City

  • Public register of all sensors.
  • Increase technological knowledge.
  • Research on digital resilience in young children.

Data ownership: availability of and access to data (open data)
The creation of a collection of facial images by private individuals is prohibited by law; the government can only do this under specific circumstances and for set goals. In principle, citizens are the owners of all their personal data, although some data must be surrendered to the municipality on the basis of the law on personal registration. This information is public at an aggregate level. This applies to data collected by the government, but should also apply to private companies. Trading personal data without the owner's permission is also prohibited. In practice, this permission is often only implicitly requested and given.

Activities foreseen in the Amsterdam Agenda for the digital city:

  • Establishing digital rights.
  • Supporting cooperatives that offer alternatives to platform monopolies.
  • Introduction of privacy by design.
  • A digital identity that enables citizens to reduce the sharing of their data.
  • Developing a strategy on data minimization, sovereignty, data commons and open data.

Decode

Decode is an EU project that creates tools to give people ownership of their data. These tools combine blockchain technology with attribute-based cryptography to enable the owners of data to manage it. A short video introduces the project.


Private data can be made searchable, but access is granted only to parties who are entitled to it or who have received permission from the owner of the data. This new concept of data rights also applies to data sent to or used by Internet of Things (IoT) objects.
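The toy sketch below illustrates the general idea of owner-defined, attribute-based access that projects like Decode pursue; it is not Decode's implementation, and the policy format, attributes and sensor reading are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataRecord:
    payload: str
    policy: dict = field(default_factory=dict)  # attributes a requester must hold

def request_access(record: DataRecord, requester_attributes: dict) -> Optional[str]:
    # Access is granted only if every attribute required by the owner's policy
    # is present and matches; otherwise the payload stays hidden.
    if all(requester_attributes.get(k) == v for k, v in record.policy.items()):
        return record.payload
    return None

# The owner decides who may read a sensor value from their home.
record = DataRecord("PM2.5: 12 µg/m3",
                    policy={"role": "air_quality_service", "city": "Amsterdam"})

print(request_access(record, {"role": "air_quality_service", "city": "Amsterdam"}))  # granted
print(request_access(record, {"role": "advertiser", "city": "Amsterdam"}))           # None
```

In a real system the policy check would be enforced cryptographically rather than by a simple dictionary comparison, but the shift in control is the same: the owner, not the platform, defines who gets in.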


Open software
The use of open source software is recommended to prevent 'lock-in' as a result of dependence on suppliers. Proprietary software understandably impedes switching to another provider. In addition, thanks to built-in functions, commercial software suppliers can collect as much information from their customers as they want. In the case of open software, the code and the way in which it is used are public. Being available for free does not mean that this software is cheap, given the need for customization.

Activities foreseen in the Amsterdam Agenda for the digital City

  • Making as much data as possible open and shareable on a city platform.
  • Appointing an information officer to maintain the principles of 'privacy by design' and 'openness unless'.

Cybersecurity
Apart from high-profile policing, cities and users can do a lot to increase cybersecurity.

The Internet of Things connects more and more devices to the internet, whose security level is usually very low because they are designed to be cheap. Regulators have a role in improving standards.

Striking data breaches at international companies receive a lot of media coverage, but cybercriminals are increasingly focusing on community groups, schools, small businesses, and municipal governments, which in general have a low level of security. By taking five critical steps themselves and encouraging the general public to do the same, they can greatly improve security.

In addition, cities need to improve their resilience to denial-of-service attacks, which can, for example, prevent residents from entering (or leaving) certain buildings, or shut down remote systems such as traffic lights and fire-extinguishing water pipes.

Companies and institutions must assess threats and develop controls for the most critical ones. (Source: European Union Agency for Network and Information Security).

Activities foreseen in the Amsterdam Agenda for the digital City

  • Secure Wi-Fi services for citizens and visitors in public buildings and busy places.
  • Fighting cybercrime to protect vital infrastructure and administrative stability.
  • Production of a social service handbook to offer digital amenities in a safe way.

The challenge of the digitally just city

In this article, care for privacy, protection from being tracked, and protection against cybercrime are seen as part of the digital rights of citizens from a humane perspective. At the same time, I mentioned the fact that, in the eyes of many, Amazon is one of the most reliable institutions in the US. I am afraid this is an excellent example of what Lewis Mumford called the megatechnic bribe. Amazon meets the material desires of American consumers. This makes them indifferent to their privacy, their sovereignty as consumers and ultimately their own jobs, in exchange for Amazon's low prices, rich choice and fast delivery. For those who preach consumerism as the holy grail, this is not a bad deal, but others should know better. In my search for a humane city, where principles such as sustainability, justice and quality of life prevail, this tendency is daunting.

The growing penetration of mega-technology companies into everyone's life with the intention of shaping consumer behavior is equally worrying, because people need to reduce their ecological footprint, increase social capital and refocus culture on non-material qualities of life.

Unfortunately, the poorest part of the world population is excluded from the increase in material prosperity.

Governments have an important task. They need to ensure that products are sold at fair prices, including compensation for CO2 emissions, and that profits are used for society as a whole rather than for a handful of mega-rich tycoons. This opens the way for cities that are both humane and prosperous.

Below I summarize how just digital technology can support the development of humane cities.

Just application of digital technology to support the development of humane cities

  1. Local authorities can contribute to fair and safe digital services by enabling fast internet for all citizens, improving digital self-reliance and resilience, making digital services available, installing sensors to improve the quality of the environment, strengthening digital art and the creative industries, protecting net neutrality, maintaining open data standards, protecting privacy, critically assessing AI applications, and effectively preventing and combating cybercrime.
  2. Fair competition is out of reach if near-monopolistic platform owners are also influential sellers on their own platforms.
  3. Legislation related to monopolistic technology companies such as Facebook, Google and Amazon must protect privacy, prohibit toxic language, increase interoperability, and prevent or break up monopolistic concentrations.
  4. Companies offering free internet services in exchange for access to user data must also offer the same services for a fee, without collecting data and without placing advertisements.
  5. Artificial intelligence must be aligned with social values and ethical principles, including fairness, environmental sustainability, and the right to self-determination. Premature application of these and other technologies in favor of quick profits or results should be avoided.
  6. The use of artificial intelligence for the analysis and supervision of processes that fall under the jurisdiction of the city government can contribute to principles such as diversity, respect, equality, quality of life and sustainability, provided its development is subject to design principles such as transparency, accountability and fairness.
  7. Forms of digital connectivity, including the remote monitoring of smartphones, are established and agreed upon in a city-wide discussion in which the pros and cons for different groups within the population are stated openly.
  8. Citizens are in principle the owners of their personal data. Consequently, they must be enabled to authorize the government and other parties to collect personalized data, with the exception of data whose collection is regulated by law.
  9. The local government will be restrictive with regard to the collection of personal data. The collection of data, according to the 'privacy by design' principle, is aimed at enforcing the law, improving the well-being of the city as a whole and serving the legitimate interests of the citizens themselves.
  10. All non-personal data collected in the public realm are 'open', which means they are available in a public data portal, unless there are legal objections.
  11. Interoperability is a guiding principle in the functioning of computer systems. In general, this principle will be realized by using open software.
  12. Providers of (public) Wi-Fi are responsible for the safety of its use, which implies that access to the Internet without secure registration will be limited. Similarly, it is mandatory that equipment connected to the Internet has a certificate for cyber quality.
  13. Companies and organizations that neglect the protection of their hardware, software and data can be held liable for the damage that cybercriminals cause to third parties.

Written by Herman van den Bosch, Professor at the Open University of the Netherlands.

Header photo (a neural network): Kevin Rheese, Licensed under Creative Commons 2.0.
