
Back in 2024, All Text in NYC performed large-scale character recognition across every Google Maps Street View image in New York, producing a search engine that lets users find any word captured in the city's streetscape.
Now, Sean Hardesty Lewis at Cornell Tech has pushed the idea further – using vision-language AI models to build an interactive map that lets users search not just for words, but for virtually anything visible in the city, from objects and materials to styles, atmospheres, and abstract visual patterns.
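The general approach behind this kind of search, embedding both a text query and street-level images into a shared vector space and ranking images by similarity, can be sketched with toy vectors. This is only an illustration of the technique, not the project's actual pipeline: the panorama IDs and embedding values below are made up, standing in for what a vision-language model such as CLIP would produce.

```python
import numpy as np

# Hypothetical stand-ins: in a real system these embeddings would come from
# a vision-language model (e.g. CLIP) that maps text and images into the
# same vector space.
image_embeddings = {
    "pano_001": np.array([0.9, 0.1, 0.0]),  # say, a scene containing a statue
    "pano_002": np.array([0.1, 0.8, 0.2]),  # say, a quiet alley
    "pano_003": np.array([0.7, 0.3, 0.1]),
}

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, independent of their magnitude."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_embedding, top_k=2):
    """Rank panoramas by similarity to the query embedding."""
    scores = {
        pano_id: cosine_similarity(query_embedding, emb)
        for pano_id, emb in image_embeddings.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# A query like "statue" would be embedded by the same model; faked here.
query = np.array([0.85, 0.15, 0.05])
results = search(query)
```

Because text and images share one embedding space, the same ranking works for a concrete query like "statue" and for a fuzzier one like "hidden away", which is what lets the map answer atmospheric searches at all.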
Searchable City allows you to search New York's Google Maps Street View imagery for just about anything. For example,
https://searchable.city/?q=statue
returns Street View images of statues across the city, identifying and mapping each instance detected by the system.
Searchable City isn’t limited to clearly defined objects; it can also surface more diffuse, abstract, or atmospheric qualities, revealing Street View scenes that feel tucked away or off the beaten track and highlighting quieter, less visible corners of the city.
In doing so, the project hints at a different kind of urban interface – one where the city becomes searchable by meaning as well as by location.
