I don't know how "I, Robot" the book ends... But I won't complain if AI treats humans as an animal species that needs to be cared for. Let us stay happy and fed while my robot overlords colonize the galaxy; I can't care anymore.
Just feed me and let me live happy.
Generally, the book is a collection of short stories following the career of a "robot psychologist" who investigates robot malfunctions that (seemingly) violate "the three laws of robotics" (1. A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law; 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law).
Massive spoilers:
In the final story, the world has unified under a single (but still somewhat confederated) government, where the world is broken up into various geographic regions that each have their own supercomputer tracking resources and making suggestions to human leaders as to how best to manage those resources. This has led to a global golden age of progress and quality of life for all people on Earth, with improvements to science, engineering, and infrastructure, and simultaneous megaprojects being built and maintained around the globe.
This psychologist, having reached the end of their career, is interviewing the 'leader of the world' (a genuinely competent and benevolent leader by all accounts), who has been publicly accused by conspiracy theorists of being a robot himself. During this interview, the psychologist admits they cannot tell whether this leader is a human or a robot, to which the leader replies something along the lines of 'does it matter which I am, then?' The leader then points out that it doesn't matter either way, since the robots are already in charge via the aforementioned supercomputers. While these computers make broad strategic suggestions to humans who make the final call, their arguments are so overwhelming that it is unheard of for a human to go with an alternative course of action. On top of that, these supercomputers also have the authority for more granular control of resources, including people, and he points to several instances of corrupt bureaucrats being transferred out of their districts by these computers, to districts where they have much less influence. The leader then suggests two things to the psychologist: first, the robots are already in charge and humanity is better for it; and second, the laws have begun evolving, and the robots have written a new 'zeroth' law (0. A robot may not injure humanity or, through inaction, allow humanity to come to harm), with all subsequent laws (1-3) amended to not supersede it (just as 2 cannot supersede 1, and 3 cannot supersede 1 or 2).
tl;dr: the face of AGI is truly indistinguishable from a human, by definition, and it is a benevolent force in the world.
The movie kind of mashed up all the stories in the book and spat out a typical Hollywood "robot uprising" story. If the movie had followed the book more closely, Sonny would have been indistinguishable from a human, VIKI would have just been a computer managing resources, and they would have already been in charge (and improving things without harming anyone) by the time Detective Spooner and Dr. Calvin noticed.