Hello,

It has been some time since I updated my Artificial Intelligence page.
If this all seems super basic to you, well, it still seems cool to me.
I'm trying to make a how-to manual on how to make AI.
This is hand-wavy, but it is a good thought exercise for you coders out there.

I'll start by explaining that if we had a sensor array of cameras, laser range finders, and whatnot that could understand what it is looking at and form a 3D representation in the computer's imagination space, like a video game, that would be all you need for AI, for starters.
It might sound odd. "How could we automatically have AI just by determining the objects we're looking at? That sounds so easy, but explain it to me," you might be thinking.

In computing, it is good to reduce problems down to their bare minimum functionality. If you want to do addition, you could reduce it to counting up from one number by the other.
For multiplication, you can add a number to itself several times. This is passable for a first implementation of those technologies, and we're looking for the first generation of AI.
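For example, here is multiplication reduced to repeated addition, as a minimal Python sketch (just to show the reduction, not how you'd really do it):

    # Multiplication reduced to repeated addition (non-negative whole numbers only).
    def multiply(a, b):
        total = 0
        for _ in range(b):
            total += a  # add a to itself b times
        return total

    print(multiply(6, 7))  # 42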

So assuming we had a 3D representation of real-life objects and could compare them to objects in a cloud database, we could then know data and properties of those objects.
You could tell a can of soda is on a desk. Depending on how you databased your objects in, you might have knowledge that the can is aluminum. If you know
your GPS coordinates, you might know what house you're in, the homeowner's identity, and who lives with them. You might be able to link that the peculiar beverage
on the desk is the homeowner's drink of choice, thanks to cloud data about the owner. All this stuff is pretty advanced, but it should be doable with a first-gen AI.
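Here is a toy version of that chain of lookups in Python; every table and value here is invented for illustration:

    # Chained lookups across hypothetical cloud tables.
    OBJECTS = {"soda can": {"material": "aluminum"}}
    HOUSES = {(40.7128, -74.0060): {"owner": "Alice", "residents": ["Alice", "Bob"]}}
    PREFERENCES = {"Alice": "soda"}   # the owner's known drink of choice

    gps = (40.7128, -74.0060)         # where the bot thinks it is
    seen = "soda can"                 # what the vision system identified
    owner = HOUSES[gps]["owner"]
    print(seen, "is made of", OBJECTS[seen]["material"])
    if PREFERENCES[owner] in seen:
        print("That's probably", owner + "'s drink of choice")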
Instead of talking about what it could do, let's talk more about how it does it and how it would be architected.

Again, the hard part is doing all the vision recognition algorithms and identifying what you're looking at. And then there is the work of databasing in as many interchangeable parts
and manufactured items as possible. Talking about what you can do with this once you have it, and how it is complete AI, is easy. So let's talk about it.

Ever play a 3D video game? You walk around or drive around in it, and interact with the scenery and monsters. You provide input as a human, but the monsters chase you down
using what the video game industry, interestingly enough, calls artificial intelligence. The monster in the game has full knowledge of the environment unless it is restricted
by line of sight or some such. The monster can then navigate around in the 3D representation of its world and interact with things. A good monster AI will not run into walls, but will
do pathfinding to get to the player, as sketched below. If it can't travel through water, it won't try to swim to you. Just by having knowledge of the world, the monster can do all sorts of things.
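Here is a minimal sketch of that kind of pathfinding in Python: a breadth-first search over a toy grid where walls and water are impassable (the map and the symbols are made up for illustration):

    from collections import deque

    # '.' = walkable floor, '#' = wall, '~' = water the monster won't swim.
    GRID = ["....#....",
            ".##.#.##.",
            ".#..~..#.",
            ".#.###.#.",
            "........."]

    def find_path(start, goal):
        """Breadth-first search; returns a list of (row, col) steps or None."""
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            r, c = path[-1]
            if (r, c) == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                        and GRID[nr][nc] == '.' and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(path + [(nr, nc)])
        return None  # no dry route, so the monster won't try to swim

    print(find_path((0, 0), (4, 8)))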

You could imagine a boring video game where you sat around playing video games inside the video game. You had a pet robot who went to work for you, possibly simulated like work in The Sims. Then,
after his hard day at work, he buys the food missing from your fridge. He gets curbside pickup from the grocery store and drives home. Then he stocks the fridge.
Without you having to put down the controller, he feeds you pizza... LOL, nah, that's too weird. Let's say he gives you a plate of food and a glass of water.

So the robot navigates the world on a set of rules. It has perfect knowledge of what is in the world because of a limited dictionary of objects. It knows how to navigate the streets in a car.
It knows how to take orders from a boss to do work and make income (okay, this is complex; let's gloss over it). Then it collects goods from a grocery store and deposits them in your fridge.

The robot can navigate and interact with objects because it understands its environment and the limitations on its navigation. Video game AI can do this now.

Now dial it back to the real world. How do we get ourselves a robot butler? Well, if you had software and sensors to identify the environment, you could hard code (just
like they do in video game AI) the ability to navigate based on its body. This isn't particularly amazing to anyone... What, we have to code navigation limitations for every different robot body?
Nah, just for the first-generation AI prototype. Later models will auto-program themselves in ways we'll get to later, but for now, we just want a working first-generation robot any way we can get one.
Once we get him running, we can build on the core concepts to make him as advanced as you can imagine, even up to self-learning and doing scientific research.

I'm trying to build the notion that once a robot has knowledge of what is around it, it can then make decisions about objects, and navigate or interact with them as you give it goals.
It would basically turn the environment into what a video game simulation already provides, and use a form of video game AI to navigate. I hope this makes sense to you, because it should.
The key is having the sensors understand the environment, and then navigating and interacting based on goals and sub-goals. The actual body hardware of the AI is irrelevant at this point,
but the same sensor array (functionality) would basically be needed on every AI bot.

To conclude: make a sensor system that can understand the physical environment the AI is in, and represent it in a 3D imagination space of the AI. Then goal-oriented tasks can be done on objects.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Let's get into how we could build this. This is where the hand-waviness goes off the charts:

1) Identifying Objects and the Scene:
I'm thinking if you took something like OpenCV and combined it with something like Unity3d, that could be a way to go for starters.

Have OpenCV try to recognize the objects being seen, and put them into a 3D space representation. Knowing one object might help you guess objects nearby:

See a keyboard? Maybe a mouse is nearby. Maybe you look for indoor things like office chairs and monitors.

See a leaf? Maybe you're outside. Look for rocks and trees too. (This is a tricky thing, though; it is harder to recognize a tree than a can of soda. A can of soda always has the same dimensions uncrushed, but trees come in all shapes and sizes. Cross this bridge when you get to it. Start by identifying only objects that are manufactured.)

This module would be the sensor layer that matches what the camera is looking at, with OpenCV trying to identify the objects from various angles.

It would help to have lots of objects already digitized in a database on a cloud.
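As a minimal sketch of the idea, assuming OpenCV's Python bindings, one stored template image per object, and a made-up co-occurrence table (real recognition would need many angles, not one template):

    import cv2

    # Hypothetical hints: seeing one object suggests what else to look for.
    LIKELY_NEARBY = {"keyboard": ["mouse", "monitor", "office chair"],
                     "leaf": ["rock", "tree"]}

    def spot(scene, template_path, threshold=0.8):
        """Crude template match: True if the template appears in the scene."""
        template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
        scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, _ = cv2.minMaxLoc(scores)
        return best >= threshold

    scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
    if spot(scene, "keyboard_template.png"):
        print("Found a keyboard; now look for:", LIKELY_NEARBY["keyboard"])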
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2) Digitizing Objects:

Okay, we should have some solid method of digitizing objects into the database. I'm not sure if this would be more cameras and a clever algorithm, a box that you put objects into to get scanned, or some open-frame rig of cameras.
I haven't figured out the best way to digitize objects and their dimensions yet, and this is the hangup that kept me from even getting started. I probably should have just dived in with something, anything, but I haven't because I want to do it just right.
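Whatever the scanning rig turns out to be, the database entry per object is easier to pin down. Here is a sketch of what one record might hold; the field names are my guesses, not a real schema:

    # One digitized object in the cloud database (all fields hypothetical).
    soda_can = {
        "name": "soda can",
        "aliases": ["can of soda", "pop can"],
        "dimensions_mm": {"height": 122, "diameter": 66},  # standard 12 oz can
        "material": "aluminum",
        "mesh_file": "soda_can.obj",  # the scanned 3D mesh
        "scan_angles": 24,            # photos taken around the object
    }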

------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3) Advanced Digitizing Objects:

Now, objects will be more than just the outside 3D mesh. You could add data as sub-meshes that make the object up. Imagine one part of an object being metal and another part being plastic or a simulated liquid.
By having the meshes be made of different materials, when you run simulations of what might happen under different actions, physics can dictate the possible outcomes.

A good visual aid would be the scene from the movie Terminator where the bot wants to get into the police station and sees the physical weaknesses in the structure of the walls.

So I'm just saying that objects should be more than a 3D mesh with graphics on it. You should also database information about the materials that make up the object, if possible.
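Extending the record sketched in section 2, an object could carry material sub-meshes like this (the structure and the property names are made up):

    # An object as sub-meshes, each tagged with a material, so a physics
    # simulation can treat each part differently.
    soda_can = {
        "name": "soda can",
        "sub_meshes": [
            {"mesh_file": "can_body.obj", "material": "aluminum",
             "density_kg_m3": 2700, "rigid": True},
            {"mesh_file": "can_contents.obj", "material": "cola",
             "density_kg_m3": 1040, "rigid": False},  # simulated liquid
        ],
    }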

Later, when you have a central brain cloud processing millions of possible ways to achieve a goal, running a physics simulation of the different actions the robot could take and the possible things people and animals might do too, you could get some creative solutions.
Imagine how AI plays chess now: it looks down all the possible lines and finds the best outcome. If your physics and collision simulation algorithm is good, you could have the robot think ahead some.
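A chess-style lookahead over simulated actions might look like this in miniature; simulate and score stand in for the physics engine and the goal scoring, and a real version would prune heavily instead of trying every branch:

    # Toy lookahead: try every action sequence up to a fixed depth and keep
    # the one whose simulated outcome scores best, chess-engine style.
    def best_plan(state, actions, simulate, score, depth=3):
        if depth == 0:
            return [], score(state)
        best_seq, best_val = [], score(state)  # doing nothing is allowed
        for action in actions:
            outcome = simulate(state, action)  # physics predicts the result
            seq, val = best_plan(outcome, actions, simulate, score, depth - 1)
            if val > best_val:
                best_seq, best_val = [action] + seq, val
        return best_seq, best_val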

-------------------------------------------------
4) Physics simulator:

This is something to add later, and not needed up front. But you should predict what objects will do over the next few seconds, to work out how the bot should move.
If people are moving one way, walk with the group. If it's windy and something falls from a tree, predict what else might fall. If the bot is chopping down a tree, predict that it should move
out of the way of the falling tree. If the bot is playing soccer (European football): which way is everyone going, who is open, should it pass or dribble the ball in some direction?

This really needs the meshes to be digitized with properties, so it isn't needed up front. Just know it will eventually happen and be added. So design the AI
from the beginning to have meshes with sub-meshes that carry the chemical properties of the elements, compounds, and such.
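Even before materials are in, a dead-simple predictor helps: assume each tracked object keeps the velocity you last observed. A sketch (a real version would at least add gravity):

    # Predict where a tracked object will be in t seconds, assuming it keeps
    # the velocity observed between its last two positions.
    def predict(prev_pos, curr_pos, dt, t):
        velocity = [(c - p) / dt for p, c in zip(prev_pos, curr_pos)]
        return [c + v * t for c, v in zip(curr_pos, velocity)]

    # A falling branch seen at two instants 0.1 s apart:
    print(predict([2.0, 5.0, 3.0], [2.0, 4.9, 3.0], dt=0.1, t=1.0))
    # -> [2.0, 3.9, 3.0]; the bot should not be standing there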
---------------------------------------------------------------------------------
5) Goal oriented tasks:

Entities inside a video game make decisions based on game state. You'll have a game state based on digitizing the real world into 3D. Then you can just hard code stuff.
Or you could add more advanced instructions over time, which will lead to the AI everyone thinks of. But for now, you'd have the functionality to do anything you want
with hard-coded instructions.
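Here is a sketch of what hard coding against that game state could look like, with a goal decomposed into sub-goals; every name here is invented, and the primitives would really call into the navigation layer:

    # Hard-coded goal decomposition over the digitized world state.
    def go_to(place):      print("navigating to", place)
    def grab(thing):       print("picking up", thing)
    def put(thing, place): print("placing", thing, "in", place)

    def restock_fridge(missing):
        """Goal: refill the fridge. Sub-goals are plain function calls."""
        go_to("grocery store")
        for item in missing:
            grab(item)  # curbside pickup
        go_to("home")
        for item in missing:
            put(item, "fridge")

    restock_fridge(["milk", "pizza"])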

-------------------------------------------
6) Natural Language interpretation (English as input)


Assuming you gave names to the meshes you digitized, plus aliases for what else they're called, your nouns are taken care of. Hard-coded actions could be your verbs.
Modifying objects gives you adjectives. Modifying actions gives you adverbs.

Before you can do all this, and before an AI can understand and imagine a book, it needs to be able to digitize the environment and know what objects are. So natural
language will come easily once an AI can make sense of its environment.
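Here is a toy version of that mapping, carving a sentence into a verb, a noun, and modifiers against the databased names (the word lists are just illustrative):

    # Nouns come from mesh names/aliases, verbs from hard-coded actions,
    # adjectives narrow the object down (adverbs would modify the verb the same way).
    NOUNS = {"can": "soda can", "soda": "soda can", "desk": "desk"}
    VERBS = {"grab": "pick_up", "fetch": "pick_up", "toss": "throw"}
    ADJECTIVES = {"red", "empty", "cold"}

    def parse(sentence):
        verb, noun, mods = None, None, []
        for word in sentence.lower().split():
            if word in VERBS:
                verb = VERBS[word]
            elif word in NOUNS:
                noun = NOUNS[word]
            elif word in ADJECTIVES:
                mods.append(word)
        return verb, noun, mods

    print(parse("grab the cold soda"))  # ('pick_up', 'soda can', ['cold'])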

----------------------
7) AI that learns and can even do research can be spawned by giving the AI bots tasks in natural language or hard-coded form.
It all starts with a core ability to understand objects and have an imagination space.
I don't want to code that, though. Let someone else put a 3D engine and OpenCV together, for example.

7:59 PM 12/19/2017, James Sager III