This AI-powered camera uses location data instead of optics to create pictures

Shawn Knight

Forward-looking: The AI revolution is still in its infancy but is already inspiring products and ideas that nobody had even thought of until now. Case in point: Paragraphica, a quirky camera that uses location data and artificial intelligence instead of traditional optics to generate images.

Paragraphica is the brainchild of Bjørn Karmann. The first thing you'll notice is that it doesn't have a lens, because it doesn't need to see the scene. Instead, the camera uses open APIs to collect data about its location, including the address, nearby places, the time of day, and even the weather. The gathered data is used to craft a descriptive paragraph of the scene, which can be tweaked using three dials on top of the camera.
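The data-to-paragraph step can be sketched roughly as follows. This is an illustration only, not Karmann's actual implementation: the field names and the sample data are invented, and a real version would fetch this dict from geocoding, nearby-places, and weather APIs rather than hard-coding it.

```python
def compose_scene_paragraph(place: dict) -> str:
    """Turn location/context data into a descriptive prompt paragraph.

    In the real camera this data would come from open APIs (geocoding,
    nearby places, weather); here it is passed in as a plain dict.
    """
    nearby = ", ".join(place["nearby"])
    return (
        f"A photo taken at {place['address']} in the {place['time_of_day']}, "
        f"with {place['weather']} weather. Nearby there are {nearby}."
    )

# Hypothetical sample data standing in for live API responses
scene = {
    "address": "Vesterbrogade, Copenhagen",
    "time_of_day": "afternoon",
    "weather": "overcast",
    "nearby": ["a coffee shop", "a bicycle repair stand", "a small park"],
}
paragraph = compose_scene_paragraph(scene)
```

The resulting paragraph is what gets handed to the image model, so anything the APIs know about the location ends up steering the picture.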

The left dial is similar to the focal length of an optical camera, but instead controls how near or far the camera searches for data to include. The middle dial is a noise seed for the AI image diffusion process. Sticking with the traditional camera analogy, the third dial adjusts the aperture, i.e. how sharp or blurry the resulting image is. In this case, it controls how tightly or loosely the AI follows the paragraph when creating the final image.
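A dial-to-parameter mapping along these lines could plausibly feed a Stable Diffusion pipeline; the dial ranges, names, and numeric scales below are all invented for illustration, not taken from the project.

```python
def dials_to_params(radius_dial: float, seed_dial: int, aperture_dial: float) -> dict:
    """Map the three physical dials to generation parameters.

    radius_dial   (0.0-1.0) -> search radius in metres for nearby-place data
    seed_dial     (integer) -> noise seed for the diffusion process
    aperture_dial (0.0-1.0) -> guidance scale: how tightly the image
                               follows the text paragraph
    """
    radius_m = int(50 + radius_dial * 950)        # 50 m .. 1000 m
    guidance_scale = 1.0 + aperture_dial * 14.0   # 1.0 (loose) .. 15.0 (tight)
    return {"radius_m": radius_m, "seed": seed_dial,
            "guidance_scale": guidance_scale}

params = dials_to_params(0.5, seed_dial=42, aperture_dial=1.0)
# With Hugging Face's `diffusers` library, the seed and guidance scale
# would typically be applied like so (not executed here):
#   generator = torch.Generator().manual_seed(params["seed"])
#   image = pipe(prompt, guidance_scale=params["guidance_scale"],
#                generator=generator).images[0]
```

The point of the sketch is that each physical dial resolves to one ordinary diffusion parameter; there is nothing exotic in the generation step itself.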

Paragraphica exists both as a physical prototype and a virtual camera that you can take for a spin online. The physical version is based on a Raspberry Pi 4, a touchscreen display, a 3D printed housing, and other custom electronics.

Karmann used Noodl to build the web app that sits between the camera and the various APIs that gather location data. The camera's code is written in Python, and Stable Diffusion handles image creation. Karmann said the photos never look exactly like where he is, but they do capture some of the moods and emotions of the place in an uncanny way.


Traffic to Karmann's site is through the roof right now, so you may have to check back later to try the virtual shooter.


 
This looks like a total fake, because they take images in daylight and end up with identical illumination. Location data does not have anything related to light, so it is faked somehow.
 
Check the captions used to generate the images; time of day and weather conditions are also included. The time isn't collected via location data (it's obviously freely available on the Internet), but weather conditions also help determine lighting. Also, the lighting isn't identical; the shadows are far less defined in the AI-generated images.
 
Gee, using "google earth" street view, or something similar to make a fake photo.
THIS is not photography.
When people actually start to realize that everything about "AI" (AI - :laughing:) is fake, I bet the current AI will go the way of Clippy and Cortana - both died slow deaths once M$ realized that people don't want crap. It appears that it is going to take M$ and the other "AI" purveyors - or, as I have started to put it, AHI (a-hole intelligence) - a long time to learn that you can't keep producing the same crap over and over and expect different results.

If I can't trust AI to give me valid, factual results - and I cannot, since other TS users, in particular @yRaz, have said it will give you completely made-up answers - I have less than no use for it. In Bing, I've taken to using uBlock Origin's element-hiding helper to eliminate it from my Bing experience. Bing and other search engine results are bad enough; I don't need AHI to make the answers worse. AHI is lipstick on a pig.
 
So I've been doing A LOT of work with AI recently and am even working on making my own. I also have epilepsy, so I've been learning about how the brain works my entire life, although I won't pretend to be a neurologist. The brain is not the "CPU" of the body, as it has often been thought of, and we're still struggling with that idea around 40 years after it was proposed. The brain does not have an operating system or instruction sets, but it does a very good job of looking for patterns and making connections with things it has learned in the past, i.e., large sets of data. Is this starting to sound familiar?

The main argument I see around why AI isn't real is that we know the mechanisms behind how it does what it does. That actually isn't true; most people describe AI creating content as "hallucinating," and then you get your results from that hallucination. We know a lot more about how AI works than about the brain, but don't let that confuse you: we don't actually know how either one works. So what I take away from people's arguments that AI doesn't work like a real brain is that there is so much uncertainty in how the brain works relative to how AI works that we say "AI isn't real." It's a VERY weak argument, and one that I feel may be entirely false.

The thing is, humans are also really good at making up stories. We have buildings filled with them; they're called libraries. And talking about books brings me to an interesting point. What happens in your head when you read a book? Many people who read a lot, myself included, experience a phenomenon where they stop seeing the words and start vividly creating something akin to a movie in their head. Is this starting to sound familiar?

Humans make stuff up all the time, often unintentionally. Maybe I misheard you, or maybe I heard you correctly but the "speech to text" center of my brain interpreted it wrong, so I give a wrong response. If I have learned anything from my work with AI, it is that it is important to make sure your inputs have as little room for interpretation as possible. And that's just a good communication rule in general. When I start a project with ChatGPT, I talk it through the subject I'm working on until I'm confident it is familiar with it.

The thing is, to get AI to do its job appropriately and lower its error rate, you have to treat it differently than a Google search. People want to ask AI a question like a Google search, but that isn't how to get the best answers out of it. You need to know how to set limits and give proper instructions, but doing that is more abstract than writing code.
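The "set limits and give proper instructions" advice can be made concrete with a small sketch. Everything here is illustrative: the function, the constraint wording, and the sample task are invented, and no real API is called. The message format simply mirrors the system/user role convention common to chat-completion APIs.

```python
def build_constrained_prompt(task: str, constraints: list) -> list:
    """Frame a request as messages with explicit limits, rather than a
    bare search-style question. Hypothetical helper; the dict format
    mirrors common chat-completion APIs but nothing is sent anywhere."""
    system = "Follow these rules strictly:\n" + "\n".join(
        f"- {c}" for c in constraints
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_constrained_prompt(
    "Summarize how a Raspberry Pi 4 boots from an SD card.",
    ["Answer in at most five sentences.",
     "If you are unsure of a detail, say so instead of guessing."],
)
```

The second constraint is the kind of limit the comment is arguing for: explicitly narrowing the model's room for interpretation instead of asking an open-ended question.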

Lowering the error rate with answers from AI is very similar to lowering the error rate when interacting with humans. Humans make mistakes constantly, far more than we're willing to admit to ourselves. We make stuff up and hallucinate constantly, whether we realize it or not. And as with everything, we need to check its work, the same way I would need to check someone else's work if I had them perform a task for me. There is an expectation that AI should be infallible, but treating it like a fallible human assistant is exactly what makes it such a strong one.

I find it annoying that people think we should intentionally make it worse so it doesn't take people's jobs; that is entirely the wrong approach. If someone's job can be easily replaced by these early AIs, then how good are they really at that job? Are we to ignore these tools and impair them because we're worried about our jobs? How important are "jobs"? We seem to think that fast food workers aren't important, so we replace them with kiosks and robots, but those are just tools to streamline a job and make it more efficient. There is an idea that somehow going to college and becoming educated makes a job more important; it doesn't. This is inherent to the ideas of capitalism, and I'm glad people are finally talking about it, because the whole idea that "your job isn't important because X, Y and Z" needs to be challenged.

Our economy is going to start being filled with AI. High-level jobs are going to be replaced by AI, and the world will be better for it. We do not need to stifle innovation to save someone's "job"; what we need to do is re-evaluate how our economy works. Capitalism is not needed in an AI-dominated economy, and I find it infinitely ironic that capitalists are the ones pushing AI research right now. And just a reminder: no one's job is important, and in the eyes of a company, everyone is replaceable.
 
How important are "jobs?"
I was with you for much of your post. It was very interesting to me. But I'm having trouble with this question. Jobs are absolutely critical in our world with its strong preference for capitalism over socialism. There have been serious arguments from certain groups about reducing or eliminating what little social safety net we have.

Could we restructure our economies and our world so we don't let unemployed people go homeless and starve? Sure. Are we willing to make those changes? And could we reach consensus? Hell no. The United States House of Reps could barely agree on a measure to keep the government running.

I think there's always been this idealistic notion about what should happen when technology (or offshoring) takes away jobs that are at the bottom of the hierarchy. Displaced employees simply need to retrain, get some education and get a better job. Everyone is happy! Right? That really hasn't worked for everyone, and now the cuts we're talking about almost certainly include positions such as paralegals, attorneys, software engineers, etc. It will take a while, but if unrestricted, the cuts will be deep.
 
It's one thing when robots start flipping burgers; it's another when AI is being used in fusion reactors to sustain the reactions longer. We currently have just artificial intelligence; when we have a true artificial general intelligence, it's game over for every person who thinks their job is important. It will be able to do whatever job better and faster than any human. At that point, it doesn't make economic sense to hire humans, so what do we do when we reach that point? Do we implement failed social programs, such as requiring companies to hire a certain number of humans for every AI? Are we going to have affirmative action for human workers? Are we going to prevent ourselves from ever having fusion energy because your job is too important?

Jobs are only important to economies when goods are dependent on human labor. With AGI, every aspect of production can be automated; every aspect of a business can be streamlined and costs reduced. The only problem with this is if Apple still wants to charge $1,500 for an iPhone when it costs them literal pennies to produce. It will be designed and built by an AGI, and it will do it better than any human could.

People do not fully comprehend the impact AGI is going to have on the economy. AGI is going to destroy capitalism as we know it, and considering we don't live in a true capitalist society, I don't think that's a bad thing. I couldn't care less if this thing called capitalism being forced on us disappears. After my work with AI, I think we are less than 10 years away from an AGI; in my opinion, we could have it in as soon as 5 years. I hope your house is paid off and your savings are large, because your job is going to be replaced. You can either ignore that, thinking it'll never happen, or we can start to plan for it before unemployment hits 50%.

Currently unemployment is around 3.5%; how high does that number need to go before we need to rethink our economy? 10%? 20%? Frankly, I think the economy has needed reworking since the '70s, and the last 50 years have been out-of-control debt spending. There are only so many "tomorrows" we have to pay it off.

Currently we have 2 parties in office who both have the wrong idea. Increasing the amount of debt isn't going to fix our issues but defaulting on that debt would also crash the economy.

I only have about 15 healthy years left, so I'm fine watching what an AGI does to the economy. I'll be fine if AGI takes my job. On an intellectual level I enjoy watching AI progress, but we have already seen AI start taking mid-tier jobs, enough so that people are getting worried. The thing is, AI isn't going to get worse with time; it's only going to get better and more powerful.
 