The vision of a cognitive city places the human at the center of the cognitive experience. Such a city must have the capability to understand an individual’s needs, adapt city systems and infrastructure accordingly, and provide personalized city services that are trusted, secure, efficient, and seamless.

AI is key to building a secure, accurate, and private model of the individual, validated and controlled by the individual through the intermediary of trusted digital agents. AI also enables the orchestration of multiple city services to serve an individual’s evolving needs, while multimodal AI facilitates simple and intuitive interaction between the user and the city.

AR provides a human-centric, natural, and seamless interface to these personalized cognitive services. By enabling a digital layer that is exponentially more adaptable than the physical one, AR also increases the adaptability and resilience of the city systems and architecture.

The combination of these AI and AR capabilities will not only transform how we experience existing services but will drive the creation of completely new mixed-reality experiences that do not exist today.

These capabilities should be fully embedded in the built environment, ideally from the planning and design phase. Indeed, they provide architects, urban designers, and urban planners with powerful tools with which to create cognitive cities based on mixed realities.

A City Operating System

The full integration of AI, AR, and the built environment can be achieved through a city operating system, which integrates these capabilities with the physical infrastructure and provides secure access to them not only for city operations but also for businesses and individuals. What would a cognitive experience in such an urban environment feel like? Here are some examples:

  • I stroll down the boulevard, my AR glasses connected through Y-FI to my XG phone, leading me through doors of perception, connecting my city metaverse channels.
  • I zap through my subscribed metaverses, checking the evening vibes, the AR overlays instantly changing their content, shapes, and styles according to the selected channel.
  • I navigate through the city, crossing the road with confidence, as the city AR rules engine guides me along secure pathways, pushing the AR content to safe zones so it doesn’t block my view of oncoming traffic and crowds.
  • My personal digital muse populates my surroundings with images and data according to my interests, overlaying these visions on the city spatial anchors allocated by the city AR rules engine.
  • Inspired, I imagine a song and guide the digital muse to create AR content images to fit the melody and drape them like Banksy graffiti over the urban fabric of my neighborhood. Satisfied that this layer of frozen humanity expresses my song, I publish it to the shared reality space of my metaverse channels.
  • Initially, the creative process will feel like talking to your imagination through your digital muse, but in time the muse could scan your creative thoughts directly from your brain, with your permission, and load them into the mixed-reality city.

Unseen wonders could be loaded from our collective imaginations to populate cognitive cities like fireworks from our AR + AI boosters, allowing us to create epic mixed-reality experiences without the need for any substance beyond our digital endpoints.

Unchained

Our perception of reality in cognitive cities will be unchained from the tethers that restrict us today, allowing our full human creativity to roam freely. Here are the seven keys to unlock those chains:

  1. City-wide Spatial Anchor Systems: allow anyone to post AR content anywhere in the city, down to centimeter-level precision.
  2. City AR Rules-Engines: regulate which types of real-time AR content can be posted at each spatial location, combining public safety and zoning requirements with property digital rights (a minimal sketch of such a check follows this list).
  3. Mixed Reality Urban Design: Cognitive experience labs enable architects and urban designers to integrate digital overlays into their urban design process, taking the natural, human-centric viewpoint experienced through AR glasses as their baseline, rather than raised smartphones or tablets.
  4. Digital Public Spaces: community areas allocated by urban planners, equipped with urban furniture and arranged around large virtual screens to facilitate private or shared gatherings where you choose which AR group to join. In all cases, you will be able to perceive and interact with everyone around you.
  5. AR + AI Data Halos: enable us to visualize what private data we are sharing with whom at any moment, rendered as a colored “halo,” providing unprecedented levels of transparency and trust.
  6. Wireless Power Points: positioned throughout the city to remove the need to carry heavy battery packs or interrupt our daily journeys to power up.
  7. City Operating System: coordinates and provides equitable access to all of the above, implementing a software architecture that removes the need for repeated software upgrades, so that creating digital assets is more efficient and sustainable than creating physical ones.
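To make the second key a little more concrete, here is a minimal sketch of how a city AR rules engine might decide whether a piece of AR content can be attached to a given spatial anchor, combining the safety, zoning, and digital-rights checks described above. Everything in it is an illustrative assumption, not an existing city or vendor API: the names SpatialAnchor, ARContent, ZONING_RULES, and can_post are hypothetical, and a real engine would be far richer.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data structures; all names are illustrative only.
@dataclass
class SpatialAnchor:
    anchor_id: str
    lat: float
    lon: float
    zone: str                # e.g. "residential", "commercial", "transit"
    safe_for_overlay: bool   # False near crossings, traffic sightlines, etc.
    rights_holder: str       # holder of the digital property rights for this anchor

@dataclass
class ARContent:
    content_id: str
    category: str            # e.g. "art", "advertising", "navigation"
    publisher: str
    licensed_by: List[str] = field(default_factory=list)  # rights holders who granted a licence

# Illustrative zoning table: which content categories are allowed in which zones.
ZONING_RULES = {
    "residential": {"art", "navigation"},
    "commercial": {"art", "navigation", "advertising"},
    "transit": {"navigation"},
}

def can_post(anchor: SpatialAnchor, content: ARContent) -> bool:
    """Return True if the content may be attached to this anchor.

    Combines the three checks named in the list above: public safety,
    zoning, and property digital rights.
    """
    if not anchor.safe_for_overlay:
        return False  # safety: keep sightlines to traffic and crowds clear
    if content.category not in ZONING_RULES.get(anchor.zone, set()):
        return False  # zoning: this category is not permitted in this zone
    if anchor.rights_holder != content.publisher and \
            anchor.rights_holder not in content.licensed_by:
        return False  # rights: publisher holds no licence from the rights holder
    return True

if __name__ == "__main__":
    anchor = SpatialAnchor("a-042", 51.5007, -0.1246, "commercial", True, "landlord-17")
    mural = ARContent("c-901", "art", "street-artist", licensed_by=["landlord-17"])
    print(can_post(anchor, mural))  # True: safe zone, art allowed, licence granted
```

The point of the sketch is the separation of concerns: the spatial anchor system resolves where digital content can live, while the rules engine decides what may appear there and under whose rights, so each layer can evolve independently under the city operating system.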

So there you have it. Cognitive cities could be one valuable endpoint of the many ways that AI, AR, IoT, and several other technologies are converging. It will be a moving target and a developing story as these technologies, and others that emerge, continue to evolve in parallel.

Mansoor Hanif is a board advisor for Darabase.

