Vector robot code

Updated: 05.07.2024

A list of everything you can currently do with the Anki Vector robot, compiled into one list from the Vector app.

Say "Hey Vector" to get his attention.

Alternatively, you can press the button on his back.

Hey Vector, hello

Hey Vector, good morning

Hey Vector, good afternoon

Hey Vector, good evening

Hey Vector, goodbye

Hey Vector, my name is Luke

Hey Vector, what's my name?

Hey Vector, who am I?

Hey Vector, what's the weather like?

Hey Vector, what's the weather in London?

Hey Vector, set a timer for 5 minutes

Hey Vector, check the timer

Hey Vector, cancel the timer

Hey Vector, what time is it?

Hey Vector, take a picture

Hey Vector, take a picture of me

Hey Vector, take a picture of us

Hey Vector, come here

Hey Vector, look at me

Hey Vector, start exploring

Hey Vector, stop exploring

Hey Vector, be quiet

Hey Vector, go to sleep

Hey Vector, good night

Hey Vector, good robot

Hey Vector, bad robot

Hey Vector, go to your charger

Hey Vector, give me a fist bump

Hey Vector, find your cube

Hey Vector, do a wheelstand

Hey Vector, roll your cube

Hey Vector, listen to the music

Hey Vector, play blackjack or Hey Vector, play a game

Hey Vector, quit blackjack

Hey Vector, fireworks or Hey Vector, celebrate

Hey Vector, Happy New Year

Hey Vector, Happy Birthday or Hey Vector, how old are you?

Hey Vector, I have a question --wait-- who is Jarvis?

Hey Vector, I have a question --wait-- what is the distance between New York and London?

Hey Vector, I have a question --wait-- what is the tallest building?

Hey Vector, I have a question --wait-- what is the definition of artificial intelligence?



Anki Vector - The Home Robot With Interactive AI Technology.

Well, I bought this little guy on 10 Feb 2019. If you want a robot pet, and you want to do some AI programming on it, then I highly recommend getting an Anki Vector.

I built this project to share my code and docs.

This program enables Vector to detect objects with its camera and tell us what it finds.

We take a photo from Vector's camera and post it to the Google Vision service. Google Vision returns the object detection result; finally, we turn all the label text into a sentence and send it to Vector, so that Vector can say it out loud.
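To make that flow concrete, here is a minimal sketch of the Google Vision part (not the project's actual code: the labels_to_sentence name and the say_text call are my own assumptions based on the google-cloud-vision and anki_vector packages):

    # Minimal sketch of the photo -> Google Vision -> speech flow described above.
    # Assumes google-cloud-vision v2+ and the anki_vector SDK are installed and
    # GOOGLE_APPLICATION_CREDENTIALS is set; names are illustrative only.
    import anki_vector
    from google.cloud import vision


    def labels_to_sentence(image_path):
        """Send an image to Google Vision label detection and build a sentence."""
        client = vision.ImageAnnotatorClient()
        with open(image_path, "rb") as f:
            image = vision.Image(content=f.read())
        labels = client.label_detection(image=image).label_annotations
        names = [label.description for label in labels]
        return "I found " + ", ".join(names) if names else "I found nothing"


    if __name__ == "__main__":
        # The path is just an example; the project keeps its latest photo
        # in resources/latest.jpg.
        sentence = labels_to_sentence("resources/latest.jpg")
        with anki_vector.Robot() as robot:
            robot.behavior.say_text(sentence)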

Here are some demo videos:

  1. Vector detected a watch on my desk.
  2. Vector detected a bear on my phone’s album.


  3. Vector detected a game controller on my desk.


  4. Vector was exploring on my desk as usual and randomly told me what it found.


Well, let's see how to do it.

Run the code yourself

  1. Install the Vector Python SDK. You can test the SDK by running any of the examples from anki/vector-python-sdk/examples/tutorials/ (see the smoke test sketch after this list).
  2. Set up your Google Vision account. Then follow the Quickstart to test the API.
  3. Clone this project locally. It requires Python 3.6+.
  4. Don't forget to set the Google Vision environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key, e.g. export GOOGLE_APPLICATION_CREDENTIALS="/Workspace/Vector-vision-62d48ad8da6e.json"
  5. Make sure your computer and Vector are on the same Wi-Fi network. Then run python3 object_detection.py .
  6. If you are lucky, Vector will start the first object detection; it will say "My lord, I found something interesting. Give me 5 seconds."
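If you want to check the setup before running the project, a quick smoke test along these lines can confirm both halves work (a hypothetical snippet, not part of the repo; it assumes the SDK has already been paired/configured and that the credentials variable from step 4 is set):

    # Hypothetical smoke test for steps 1-4 above (not part of the repo).
    import anki_vector
    from google.cloud import vision

    # Vector SDK check: connect and say a short phrase.
    with anki_vector.Robot() as robot:
        robot.behavior.say_text("The SDK is working")

    # Google Vision check: constructing the client fails if
    # GOOGLE_APPLICATION_CREDENTIALS is missing or points to a bad key file.
    client = vision.ImageAnnotatorClient()
    print("Google Vision client created successfully")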
Here is how the code works, step by step:

  1. Connect to Vector with enable_camera_feed=True , because we need the anki_vector.camera API.
  2. We'll need to show what Vector sees on its screen, and close the camera after the detection.
  3. We take a photo from Vector's camera and save it so we can send it to Google Vision later.
  4. We post the image to Google Vision and parse the result as text for Vector.
  5. Then we send the text to Vector and make it say the result.
  6. Finally, we put all the steps together (see the sketch below).
  7. We want Vector to trigger the detection action randomly, so we wait for a random time (about 30 seconds to 5 minutes) before the next detection.
  8. When Vector successfully triggers the detection action, you should see logs:

You can find the latest photo that Vector used for detection in resources/latest.jpg.
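Putting the steps together, the overall loop looks roughly like this (a sketch under my own assumptions, not the repo's actual object_detection.py: it reuses the hypothetical labels_to_sentence helper from the earlier snippet, and the camera and screen calls follow the SDK versions that accept enable_camera_feed=True, so newer releases may differ):

    # Rough sketch of the detection loop described in the steps above.
    import random
    import time

    import anki_vector


    def detect_once(robot):
        robot.behavior.say_text("My lord, I found something interesting. Give me 5 seconds.")
        photo = robot.camera.latest_image      # latest frame from the camera feed
        photo.save("resources/latest.jpg")     # keep the latest photo, as noted above
        # Show the photo on Vector's face while the labels are fetched.
        screen_data = anki_vector.screen.convert_image_to_screen_data(photo)
        robot.screen.set_screen_with_image_data(screen_data, 4.0)
        # labels_to_sentence is the hypothetical helper from the earlier sketch.
        robot.behavior.say_text(labels_to_sentence("resources/latest.jpg"))


    with anki_vector.Robot(enable_camera_feed=True) as robot:
        while True:
            detect_once(robot)
            # Wait roughly 30 seconds to 5 minutes before the next detection.
            time.sleep(random.randint(30, 300))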

This program enables Vector to place our shoes for us. Vector will place our shoes while we're not at home, so we can leave home without worrying about the shoes, especially when we're in a hurry.

This program is still in the research stage. I'll share the plan, the design, the docs, and the code here. If you're interested, I highly recommend opening an issue on GitHub so we can talk about it further; any help is welcome!

Here is a draft demo video I made to give you guys a sense of the program:
