The #1 technology that needs to exist before we can have true AI is computer vision that can digitize the world into a 3D representation. Once we have that, the floodgates are open for artificial intelligence. Because computer vision isn't my strength, I won't be working on artificial intelligence until that milestone in human achievement is reached. This page explains in detail how you could create artificial intelligence if you had access to computer vision technology.

* Another look at AI.
* My actual progress log in developing the true AI (conclusion: we need a camera-to-3D digitizer).

It may sound crazy to some, but I'm positive I know all the steps for coding up a simulated human on a computer. The kind of AI everyone dreams of, where the computer can learn from what you say; the kind of AI that could be used to make sci-fi androids. Yes, I know how to do it. It will probably take me 20-50 years on my own. Hopefully I get into an organization along the way and speed up the process. Or, like anything else, I'll probably come up with the idea and someone else will develop it. It's all good though, since it benefits humanity.

So you want the steps for making true AI that can simulate a human and learn from various inputs such as sound, vision, reading books, or communicating with a human.

The main part you need to know is modeling a 3D imagination world (kinda like Quake). The computer uses the 3D imagination to guess what will happen next given its choices. It can conceptualize what might happen given different events (basically the same algorithm used for game playing/chess). And if it wants to accomplish something, it has to complete goals and subgoals. If the computer has an imagination, and some way to take sensory input and form a perception of the world in its mind, then it can make choices and interact with the world.
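That "guess what will happen next given choices" idea is the same brute-force lookahead a simple game-playing program uses: simulate every sequence of moves in the imagination and pick one that reaches the goal. A minimal sketch in Python, assuming a made-up toy world where the whole state is just a position on a number line:

```python
from itertools import product

# Toy world: the AI's "imagination" is a state it can simulate forward
# without actually acting. Actions are a step left or a step right.
ACTIONS = [-1, +1]

def imagine(state, action):
    """Predict the next state in the imagination (no real-world effect)."""
    return state + action

def plan(state, goal, depth=5):
    """Try every sequence of choices up to `depth` moves ahead and return
    the first sequence whose imagined outcome reaches the goal -- the same
    brute-force lookahead a simple chess program uses."""
    for n in range(1, depth + 1):
        for seq in product(ACTIONS, repeat=n):
            s = state
            for a in seq:
                s = imagine(s, a)
            if s == goal:
                return list(seq)
    return None  # goal not reachable within the search depth

print(plan(0, 3))  # -> [1, 1, 1]
```

A real version would have a far richer state (the whole 3D scene) and would prune instead of enumerating, but the shape is the same: imagine, compare to the goal, choose.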
Let's reduce what a human is:
1) Body
2) Senses
3) Mind

1 Body - To build AI, you don't need a body first. The structure of the human body is pretty much the same across different people. Some people are taller, some shorter, but overall, when you imagine a person you get a general image in your mind. You could imagine the body of a primitive AI as nothing but a monitor for output. Humans use their intelligence to learn their motor skills and to assess their limitations in sports. AI would work in much the same way. When you later add a body to the AI, you could use the power of the AI to teach it how to use its new body.

2 Senses - This is how the AI takes input... In the most generic sense, you could use text... But a real human doesn't gain consciousness via a text input... And even if one did, it would be hard to make sense of... What a human gains consciousness from as a baby are generally the 5 basic senses: sight/sound/touch/smell/taste. With less than the 5 senses, a person can be considered disabled... Helen Keller, an extreme example, could not learn until she felt... To gain sanity, to understand your world, you need to relate real-world experiences with cause-and-effect relationships... You see a ball, you see it bounce... But then you see it doesn't bounce as well someplace else... You then begin to understand that one surface is harder and the other is softer... And once you get enough information to make sense of your world, you can understand words when your parents start teaching you nouns... You see a tree, and one of your parents points to it and says, "Tree"... Oh, that's what a tree is. You see the tree, and then you hear the word tree. You now have a basis to build knowledge... In the same way, the computer AI could be built like this. If you had a system to translate what you see from a camera into a virtual 3D world (a daunting task, but one being developed on many fronts today)... you could label items and teach the computer in the old manner of hardcoding... If you coded well enough, the computer could recognize objects (like they do facial recognition now), and then imagine them in its virtual world. Other senses could be added to more easily recognize objects in reality and then place them into the computer's imagination, but simply understanding sight/sound is good enough for a start.

3 Mind - What is the mind of the human? The brain is what makes all the decisions and understands what is going on. Assuming you had senses that could build an imaginary virtual world in your computer's mind... you could then use the computer to expand on the virtual world... For example, if it only saw part of a tree... it could then use its brain to extend the tree outside its field of view... The computer's mind should be able to understand what someone says in context as well. E.g., "You're looking at the wrong part of the tree; the top half is cut off." The computer could then conceptualize what the person means, cut off the top part of the tree, then move its camera to make sure it was right... Right now you may be thinking, "How did the AI suddenly gain the power to understand what people said?" I'm just trying to describe how the mind of the computer AI should work once it's in place. One more thing the mind should be able to do is take information from people or books on objects and actions... Some children may have never seen a zebra in person, but have been told about them (horses with stripes) and seen pictures. The child has no reason to disbelieve you, as most of what you say is truthful... Eventually the child goes to a zoo and sees these creatures for real. If you tell a child that a unicorn is a real animal, they'll have no reason to disbelieve you either. Which breaks into another fundamental concept: trust... The AI should trust the main coder 100%, then assign different values of trust to the different people it meets depending on different factors... I won't get into specifics, but if one person is prone to lying, the AI should trust that person less. Just about the final thing needed is the ability to accomplish a goal. The mind of the AI should have many upon many ways of accomplishing goals, and subgoals along the way.
3b) In downtime, the AI's mind should work much the same way as when humans dream. The AI should piece together subgoals to try and form main goals. It should also run over its event list from the previous day and see if it interpreted events correctly. There are analogies between human dreaming and computer dreaming that could be taken all the way into psychology here.

So from the above, we know the body can be ignored.
The only sense we really need to start with is vision.
This isn't the only way this can be done, but from what I've looked over this is the easiest to explain and understand.

Step by step:
1- Design a 3D world where the basic laws of physics apply.
2- Use a nice object-oriented language to build a way of representing objects in this world. Every object will have an open-ended list of variables attached to it.
3- Use the language to build a way of representing verbs/actions. Basically each would be a rule of pseudo-physics. You don't need to know the laws of thermodynamics to know that heat transfers. This would represent how the AI thinks a certain action works on different objects.
4- Define a few of the objects... Like, you could define a ball, then you could define a baseball, basketball, super bounce ball, etc... Then define a floor, then a concrete wall, a wooden floor, a grass floor (actually outdoors, but like I said, complex)...
5- Define a few actions/verbs... Like: if a ball is bounced, then the force it's released with + the force it gains from falling, applied to its own bounciness and the floor's hardness, is how high it bounces... Then say if too much force is applied, the ball may even break... The straightforward way of thinking is that the action/verb list holds what is initially expected to occur, and then the object/noun list holds coefficients to modify it.
Think throw.
Think throwing a rock.
Think throwing a paper airplane.
Think throwing something like a piece of paper.
You think the same action is the throw, but how far the object is going to go depends on the object. Then there is a whole class of objects that you can't lift, so you don't care about them... And then there are objects you've never thrown before, but you can probably guess how well they'd fly.
6- Design a camera input system that takes a shot of reality, and the computer interprets it into the different known objects that are hard-coded into its database... Then the computer represents the world in its brain.
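Steps 2 through 5 can be sketched in a few lines. A toy Python illustration, assuming nouns are just bags of coefficients and verbs are pseudo-physics rules that read them (every class name, property, and number here is invented for the example):

```python
# Step 2: objects (nouns) carry an open-ended list of variables.
class Thing:
    def __init__(self, **props):
        self.props = props  # e.g. bounciness, hardness, weight, ...

# Step 4: define a few objects.
super_ball = Thing(bounciness=0.9)
baseball   = Thing(bounciness=0.25)
concrete   = Thing(hardness=1.0)
grass      = Thing(hardness=0.5)

# Steps 3 and 5: a verb is a pseudo-physics rule -- no real
# thermodynamics, just "the expected outcome, modified by the
# objects' coefficients."
def bounce(ball, floor, drop_height):
    """How high the ball comes back up after a drop."""
    return drop_height * ball.props["bounciness"] * floor.props["hardness"]

print(bounce(super_ball, concrete, 2.0))  # -> 1.8  (lively)
print(bounce(baseball, grass, 2.0))       # -> 0.25 (a dull thud)
```

The same `Thing` objects would feed every other verb (throw, break, roll), which is exactly the "verb list holds the expectation, noun list holds the coefficients" split from step 5.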

If the computer doesn't recognize something, it would make a request to the coder for a hardcoded explanation... A correction could be given, too, if the coder specifies that the computer misrepresented an object...

Now you could keep hardcoding all this information and develop yourself a killer case of carpal tunnel, or you could make something more user-friendly... Maybe you did something like this back when you were defining objects: copy/pasting information for some objects and changing only key values.

Maybe you want to start thinking about making this computer actually take things in context and understand you...

You could do this more easily by typing at a keyboard, but let's take the more alluring route and say we have a speech recognition program now... Other people have coded these; I would never in a million years make one from scratch.
So you say, "Computer, the ball that you think is a baseball is in actuality a tennis ball. Tennis balls are green, while baseballs are white with red stripes."
Assuming the computer knew every word you said, it would understand that tennis balls are different from baseballs in color, at least. But how did it understand English so fast... Yes, I skipped ahead... Let's just suffice it to say that teaching English to a computer that has a 3D perception of the world is a pain, but straightforward. Really, in the most basic sense, you need only teach it the structure of the language; then if it comes across words it doesn't know it can say, "I don't know this word." Remember playing Zork? Remember it not knowing half of what you typed in, so you had to read the rules to understand what words it knew? At the most basic level, imagine playing Zork where it directly asked you what words meant when it didn't understand... "I don't know what a Tennis Ball is. What is it?" It would then prompt you. Then, using the words it knew, you'd give it a basic understanding of the new word. At first this will be a tedious task, as in trying to explain one word you will use many other words it doesn't know... And you'd have to define them all using the limited subset of vocabulary hard-coded in... But over time and revisions, the game could build up a great vocabulary, and later, as it learns more words, it can apply them... You may see that once this AI is programmed in English, it'd be trivial to translate it to other languages... In addition, you could use it to translate from one language to another, because it would even understand context, which is one of the big problems in computerized language translation today.
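That Zork-style "ask when you don't know a word" loop is easy to mock up. A toy Python sketch, assuming definitions are just strings and the human's answer comes in through a callback (every word and definition here is invented):

```python
# A tiny hard-coded starting vocabulary (the Zork-style subset).
vocab = {
    "ball": "a round object that bounces",
    "tree": "a tall plant with a trunk and leaves",
}

def read_sentence(words, ask):
    """Process a sentence word by word. Any unknown word triggers the
    "I don't know what a X is. What is it?" prompt via `ask`, and the
    answer is stored so the vocabulary grows over time."""
    for w in words:
        if w not in vocab:
            vocab[w] = ask(w)
    return [vocab[w] for w in words]

# Here a canned reply stands in for the human answering the prompt.
meanings = read_sentence(["ball", "zebra"],
                         ask=lambda w: "a horse with stripes")
print(meanings[1])        # -> a horse with stripes
print("zebra" in vocab)   # -> True
```

A real version would define new words in terms of the 3D concepts behind the known ones rather than raw strings, but the loop is the same: parse, hit an unknown word, prompt, store, continue.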

So you have a computer that can see, hear, imagine, learn and communicate... Right now, you're pretty much done... If it is given a body, taught cause and effect, and how to observe for success/failure, then it could play ball, learn how to navigate, make baskets, etc. It would all be based on a series of main goals, built on subgoals. The more it practices, the better it gets, because it refines the subgoals.
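That "refine the subgoals with practice" loop can also be sketched in a few lines. A toy Python example, assuming one made-up subgoal parameter (throw force) and an imagined linear physics, where each practice round nudges the parameter toward whatever reduced the miss:

```python
# Imagined physics: how far a throw of a given force travels.
def throw(force):
    return force * 1.5

def practice(target, force=1.0, rounds=50, step=0.1):
    """Refine the throw-force subgoal by repetition: observe the miss
    distance, nudge the parameter against it, repeat until close."""
    for _ in range(rounds):
        miss = throw(force) - target
        if abs(miss) < 0.01:
            break          # close enough -- subgoal refined
        force -= step * miss
    return force

force = practice(6.0)
# After practicing, throws land within 0.01 of the 6.0 target.
print(abs(throw(force) - 6.0) < 0.01)  # -> True
```

Real subgoals would have many parameters and noisy feedback, but the principle is the one in the paragraph above: repetition plus observed success/failure refines the subgoal.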
There are infinite uses for this technology.
Email me, I'm bored
