AI is a huge field. To give you some idea of its size: much of what people thought they knew has been reworked several times, and modern AI approaches don't necessarily have much to do with the older notions. A full survey is well outside the scope of my post today.
If you have questions, or would like me to expand on anything, I will need to know which part of AI you're interested in, and we can go forward from there.
Today I will just be giving you a general overview, and if you want to continue your study I would recommend Ben Coppin's book Artificial Intelligence Illuminated. From what I remember, though, it doesn't cover some of the common machine learning techniques toward which the field is shifting; these techniques focus on pattern recognition and function approximation.
It's a good book for getting your feet wet with the ideas of "Big Spaces" and searching, but for the machine learning side you'll probably have to look to a more advanced text (I will suggest one in a moment).
I suggest it because it offers a nice high-level overview of the field without getting bogged down in details that are irrelevant to somebody just starting to learn about the subject. Once you're ready for something more substantial, you could pick up AI: A Modern Approach. Be warned before opening it: it is, for all intents and purposes, a graduate-level text.
Most people don't like to read about theory, and I can't really blame them. But hands-on work only makes sense once you have at least a basic understanding of what the subject entails; only then does moving forward get easier.
The nice thing about AI is that you can dabble in just about any language.
So as far as languages go, at the starting level it doesn't matter. For some reason people always suggest Matlab, and I imagine there are lots of resources out there for Matlab because of this popularity, but I'm not a big fan of that language, so I won't point you to anything specific. If you know, or are willing to learn, Lisp, there are excellent resources for AI programming, like Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp.
Boiling It Down To the Basics
You can't go anywhere without knowing how to get there. And to get a better grasp on AI, it is extremely important to learn linear algebra.
Linear algebra, and probably some differential calculus, are a strict minimum for anyone who wants to do anything even vaguely scientific.
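To make that concrete, here is a minimal sketch of why linear algebra shows up everywhere in AI: a single artificial neuron is just a dot product followed by a squashing function. The weights and inputs below are invented purely for illustration.

```python
import math

def dot(weights, inputs):
    # The core linear-algebra operation: the dot product of two vectors.
    return sum(w * x for w, x in zip(weights, inputs))

def neuron(weights, bias, inputs):
    # One artificial neuron: a linear combination of the inputs,
    # pushed through a sigmoid to squash the result into (0, 1).
    return 1.0 / (1.0 + math.exp(-(dot(weights, inputs) + bias)))

# Illustrative values only.
output = neuron([0.5, -0.25], 0.1, [1.0, 2.0])
print(round(output, 3))  # → 0.525
```

Stack many of these neurons side by side and the dot products become matrix-vector multiplications, which is exactly where the linear algebra pays off.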
But let me get back to the point: if you know basic computer science, AI can be pretty accessible.
You're right that AI is a huge field, so it can be difficult to get into.
My advice is to find something you like and go ahead and try it. Nothing will be lost if you decide that you do not enjoy it. You will have gained some experience and that is what is important. Start by trying to code up the algorithms you have been exposed to and play around with them.
Try extending the models featured in the books I have suggested with ideas of your own, then compare the performance.
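For instance, the searching ideas from Coppin's book can be coded up in a few lines. Here's a sketch of breadth-first search over a toy state graph (the graph itself is invented for the example), which you could then extend with costs or heuristics and compare:

```python
from collections import deque

def bfs_path(graph, start, goal):
    # Breadth-first search: explores states in order of distance from the
    # start, so the first path that reaches the goal is a shortest one
    # (counted in edges).
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for neighbor in graph.get(state, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

# A toy state space, invented for illustration.
toy_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
}
print(bfs_path(toy_graph, "A", "E"))  # → ['A', 'C', 'E']
```

Swapping the queue for a priority queue ordered by a heuristic turns this into best-first search, which is a natural first extension to experiment with.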
One aspect that I found extremely engaging, and one that you may wish to explore, is the philosophical side of AI, especially if general AI is what you'd really like to focus on.
Neural networks are not the only kind of AI you will encounter. In the old-fashioned sense, there are at least three kinds of AI: Symbolism, Connectionism, and Behaviorism.
Cognitive psychology and neurobiology are gigantic fields, but they're not without their introductory texts if you're interested in exploring them on a superficial level.
I won't go into them here, but there are unconventional computing ideas that may influence the field, particularly biological and biologically-inspired computing. This style of classification may be obsolete, since modern algorithms have become more encompassing.
Symbolism comes from logic, the oldest formal inference system; Connectionism comes from biological neural networks; and Behaviorism is actually a methodology from psychology. I don't want to be verbose, so look these terms up on Google or elsewhere.
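To give a flavor of the symbolic side, here is a tiny forward-chaining inference sketch: starting from known facts, it repeatedly fires if-then rules until nothing new can be derived. The rules and facts are invented for the example.

```python
def forward_chain(rules, facts):
    # rules: list of (premises, conclusion) pairs; facts: set of known atoms.
    # Keep firing any rule whose premises are all satisfied until no rule
    # adds a new fact (a fixed point is reached).
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Invented toy knowledge base.
rules = [
    ({"has_feathers", "lays_eggs"}, "bird"),
    ({"bird", "cannot_fly"}, "penguin"),
]
derived = forward_chain(rules, {"has_feathers", "lays_eggs", "cannot_fly"})
print("penguin" in derived)  # → True
```

Contrast this with the neuron sketch of Connectionism: here knowledge is explicit, human-readable rules, whereas in a neural network it is smeared across learned weights.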
I personally wrote an AI program in Haskell, and I am a bit of a Haskell missionary. To be honest, Haskell itself is worth learning, if only for the advanced type system it exposes.
But as I said: for the basics, even the simplest languages can help get you started.