FiddlerBot Progress


FiddlerBot Details And Progress


Welcome to the FiddlerBot progress page of Mark-World, where you can track in detail the progress of FiddlerBot, a ROS-based bot of fair complexity.

Robotics has been my main focus for Mark-Toy projects in 2014.
I hope that you enjoy these creations and perhaps are even inspired to create your own robots or devices with moving parts for fun!

BEWARE!   This page shows the full details of FiddlerBot; stick to the main robotics page if you only want a higher-level summary.



FiddlerBot
(Known as RosBo prior to mid Nov 2014)

This is the semi-detailed progress log of FiddlerBot, in reverse time order, with a summary at  FiddlerBot High Level Description


We will first discuss the general architecture and then the detailed progress of FiddlerBot's creation in reverse chronological order.


FiddlerBot has variable-speed, variable-direction movement as well as a camera that can pan from side to side.
FiddlerBot makes use of a camera and image recognition combined with a full remote-control and monitoring web interface based on RosBridge and JavaScript, now fully in place.   A picture of the multi-function web interface as it appears in the Chrome browser is below.   Because FiddlerBot now works fully untethered over WiFi with a web-browser interface, I can control this bot from my wired PC, my laptop over wireless, or my Android OS tablet.   The extremely 'open' capability of the web browser combined with WiFi on the bot makes for some exciting plans for remote presence and manipulation of objects as I move forward.

FiddlerBot can accept a list of tasks to carry out and will then operate on its own to carry out those tasks autonomously.  The tasks it can do now are things like: find and grab an object, move to some other location identified with an 'AR Tag', and then drop that object at that place.

FiddlerBot has motion, a grabbing claw (thus his name), and even a 'Laser'.

FiddlerBot's current JavaScript web interface shows live video of what he is seeing, the status of his many sensors, his thoughts on how he is doing with his current goal, and where the object he is seeking is located (distance and angle from screen center).

Web Interface with live video and control of Bot

Web JavaScript enabled interface


FiddlerBot sports a brain built on the BeagleBone Black board running Ubuntu 14.04.1 Linux.  A feature-rich set of functional subsystems runs on this system, implemented as many 'nodes' communicating via messages with the help of ROS (Robot Operating System), using the current version called Indigo.    My custom hardware sits on my custom surface-mount IO board and takes great advantage of the I2C serial bus for many chips.  The IO board includes a totally separate 'Internet Of Things' enabled SparkCore processor piggybacked on the custom IO board; it provides a Wi-Fi internet interface with security through the SparkCore and the associated Spark servers on the web.   I can control this robot's basic movements using a radio hand controller or my Android phone talking to the Spark for simple movements, OR using the Chrome web browser and the feature-rich JavaScript web interface seen above.

The software is composed of many ROS 'Nodes' (processes) for isolation and for the ability to re-use these nodes.  Each node is in general either an input/sensor node or an output node that controls some physical hardware, and a 'main brain' node gets the inputs and then controls the output hardware nodes.   The nodes communicate over ROS topics (messaging channels/queues), so I could use these nodes in future projects or swap in other hardware with little to no impact on the overall main control software.  A highly flexible software architecture is the key to quick development, efficient reuse, and fault isolation.  All the software is in my own private GitHub repository.

A custom self-designed 'Claw-Arm' was developed that allows grab/release and raise/lower all to be done with ONE stepper motor!  This arm is something I'm pleased with, as I feel it may be a unique and simple solution applicable to many electro-mechanical problems.

The Modular Design Approach Is a Beautiful Thing
One of the nice things about the modularity of ROS nodes, messages, and other ROS features is that the entire wheel control module, RF input node, Wi-Fi input node, display node, or collision node could be modified in this or some other bot later, while the rest of the robot brains would be directly usable for future bots of mine.  Modularity and offloading of main-processor CPU load are key to re-use as well as supportability and future enhancements.  As a robot designer you gotta love having the flexibility of a modular set of subsystems.


FiddlerBot Software Architecture Overview


There is a key node called the 'main_brain' node which gets inputs from message queues (ROS topics) fed by the input nodes, sends display output to the display_output ROS node, and sends messages to the hardware control nodes to drive the wheels, the claw-arm, the camera pan/tilt, and the 'laser' as required.  The display is then driven by the display_output node using I2C, so we effectively have a display spooler driving the display.
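
As a rough illustration of that flow (not the actual FiddlerBot code), a minimal roscpp node in this style could look like the sketch below.  The topic names ('collision_status', 'wheel_cmd', 'display_line') and the use of plain string messages are placeholder assumptions for the example only.

// Minimal sketch of a main_brain-style ROS (roscpp) node.  Topic names and
// message types here are illustrative placeholders, not the real ones.
#include <ros/ros.h>
#include <std_msgs/String.h>

ros::Publisher wheel_pub;    // commands toward a wheel_control-style node
ros::Publisher display_pub;  // text lines toward a display_output-style node

// React to a sensor/input topic and fan out commands to the output nodes.
void collisionCallback(const std_msgs::String::ConstPtr& msg)
{
  if (msg->data == "edge_detected") {
    std_msgs::String cmd;
    cmd.data = "stop";              // tell the wheel node to halt
    wheel_pub.publish(cmd);

    std_msgs::String line;
    line.data = "EDGE! stopping";   // and show status on the LCD
    display_pub.publish(line);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "main_brain");
  ros::NodeHandle nh;
  wheel_pub   = nh.advertise<std_msgs::String>("wheel_cmd", 10);
  display_pub = nh.advertise<std_msgs::String>("display_line", 10);
  ros::Subscriber col_sub =
      nh.subscribe("collision_status", 10, collisionCallback);
  ros::spin();   // callbacks drive everything from here
  return 0;
}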

Most hardware communication uses I2C interfaces, along with some serial and standard servo control signals.

A piggy-back custom, mostly surface-mount hardware proto board ties to a BeagleBone Black ARM v7 processor running Ubuntu 14.04.1 (3.14 kernel) and the ROS Indigo toolkit.   As for any re-use of my software, which is currently in my own private GitHub repository, users would have to sort out the hardware to connect an assortment of mostly I2C-connected devices (from AdaFruit, SparkFun, or many of my own connections to raw chips) to use my software nodes as they stand.   The software is architected to isolate hardware interfaces in functions, so reuse is possible with some hacking as well.  I had all this on a Raspberry Pi B until the end of 2014, when I had to move to the BeagleBone Black to be able to use RosBridge and a JavaScript web-based robot control panel, as that was too painful on the RaspPi.

As a side note, I'm having a lot of fun using a QA100 USB 100 Msps logic analyzer from QuantAsylum, as it has built-in I2C, RS-232, and SPI interpreters.  I plug it in on top of the Raspberry Pi IO connector (seen in the picture of just my IO board) when I want to do hardware debug on this robot.




One-Line Summaries of the ROS Nodes:
   - 'display_output' node connects to the LCD display seen in the picture.
   - 'navigation_basic' node reads the AdaFruit LSM3030LHC I2C board for 3-D magnetic field and accelerometer data.
   - 'rf api' node listens to the tiny RF receiver board (from a hacked small RC car RF receiver) for driving instructions.
   - 'serial api' input node receives RS-232 commands from a Wi-Fi connected Spark-Core, so we are on the 'Internet Of Things'.
   - 'wheel_control' ROS node receives commands from the 'main_brain' node to drive the motors using a MAX5822 dual DAC.
   - 'collision detect' ROS node monitors the IR sensors using a PCF8574 digital input expander on I2C for table edge detection.
   - 'hardware monitor' node reads the Stc3115 fuel gauge chip on a custom board to get battery level as well as ambient temperature.
   - 'arm control' node runs a unique 'proprietary' crab arm and the camera pan servo.  This unique single-servo arm does the 'grab'/'lift' action.
   - 'servo control' ROS service controls the arm and the camera pan servo.  This was added to isolate the servo PWM implementation.
   - 'object detection' node monitors a color object detection board called 'Pixy' version 5.  (I may convert to OpenCV as well.)

Below you will find more detailed descriptions of the ROS nodes, but still at a readable level of detail.  Many nodes were used for the assorted sensors because part of my goal is to isolate specific lower-level hardware behind my own custom ROS drivers, so that these subsystems can be easily used in other projects.   The messages offer high-level monitoring of sensors to or from the main_brain node and isolate the specifics of the hardware implementation.


Because this is a multi-node (process) ROS implementation and most of the hardware is on the I2C bus, I use System V semaphores to provide a cross-process lock on the I2C bus (required for common hardware shared across processes).   This is fully supported on the ARM v7 architecture but means that the ROS code must be started as root (sudo su) in order to have semaphore ability.
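
To show the general idea (a simplified sketch, not the bot's actual driver), the snippet below takes a System V semaphore around a Linux I2C read.  The semaphore key, bus device path, and the 0x20 expander address are placeholders, and the create/initialize step is simplified and not race-free.

// Sketch of a cross-process I2C lock using System V semaphores.
#include <sys/sem.h>
#include <sys/ipc.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

static const key_t kI2cSemKey = 0x49324321;   // arbitrary shared key

static int getI2cSemaphore()
{
  int id = semget(kI2cSemKey, 1, IPC_CREAT | IPC_EXCL | 0666);
  if (id >= 0) {
    semctl(id, 0, SETVAL, 1);                 // creator marks the bus free
  } else {
    id = semget(kI2cSemKey, 1, 0666);         // already exists, just attach
  }
  return id;
}

static void semAdjust(int id, int delta)      // delta -1 = lock, +1 = unlock
{
  struct sembuf op = {0, static_cast<short>(delta), SEM_UNDO};
  semop(id, &op, 1);
}

int main()
{
  int semId = getI2cSemaphore();
  int fd = open("/dev/i2c-1", O_RDWR);        // bus number is board specific
  if (fd < 0) { std::perror("open i2c"); return 1; }

  semAdjust(semId, -1);                       // take the bus
  ioctl(fd, I2C_SLAVE, 0x20);                 // e.g. a PCF8574 expander
  unsigned char byte = 0;
  if (read(fd, &byte, 1) != 1) std::perror("read");
  semAdjust(semId, +1);                       // release the bus

  std::printf("expander bits: 0x%02x\n", byte);
  close(fd);
  return 0;
}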

I have all the C++ software in my private GitHub repository, and once I have a supportable system my plan is to share these nodes in a public GitHub repository so that others who wish to use the same hardware components can leverage my 'base implementations'.   My base implementations will be specific to my hardware, but I try to isolate the specifics with header files so that others may leverage these nodes with minimal effort.  It is assumed that exposure to ROS is a prerequisite to re-use of any of these nodes.  The reason I use many nodes is in fact driven by my desire to generate re-usable or customizable ROS nodes for my later projects, and perhaps for others to use as a base.  Isolation of hardware specifics from system-level software is a real beautiful thing, and I use it in a big way.



Details on The Individual ROS Nodes

A software bubble diagram showing the ROS nodes and all processes can be inspected in a new window to follow along with the text below.  The diagram shows the ROS nodes as bubbles with their names; the ROS topics are labeled along each line, which generally represents the message (topic) communications.  Hardware is generally shown in rectangles, with a hex value showing the I2C address where used.  Keep in mind it is only me, so this rather busy diagram is only in pencil; feel free to open this link to see it.

The 'main_brain' node gets inputs from message queues fed by the input nodes, sends display output to the display_output ROS node, and sends messages to the hardware control nodes to drive the wheels and the arm as required.  The display is then driven by the display_output node using I2C, so we have a display spooler driving the display.  This node uses data structures built up with a template approach that allows easy set/read and a query for whether anything has changed.   At the lowest level a template is used to form a state variable of a given type; classes are then formed and instantiated for subsystems such as the motors, hardware, or other states.  These subsystem state classes are then inherited by the higher-level BotState structure, so that moving to multiple bots or storing/saving state would be clean.
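
A tiny sketch of that template-based state idea follows.  All class and member names here are illustrative guesses rather than FiddlerBot's real code; the point is a typed state variable that remembers whether it changed, composed into subsystem states and a top-level BotState.

// Sketch of a template-based, change-tracking state structure.
#include <iostream>

// Lowest level: a state variable of any type that remembers if it changed.
template <typename T>
class StateVar {
 public:
  explicit StateVar(const T& initial) : value_(initial), changed_(false) {}
  void set(const T& v) { if (!(v == value_)) { value_ = v; changed_ = true; } }
  const T& get() const { return value_; }
  bool hasChanged() const { return changed_; }
  void clearChanged() { changed_ = false; }
 private:
  T value_;
  bool changed_;
};

// Subsystem state classes built from state variables.
class MotorState {
 public:
  MotorState() : leftSpeed(0), rightSpeed(0) {}
  StateVar<int> leftSpeed;
  StateVar<int> rightSpeed;
};

class HardwareState {
 public:
  HardwareState() : batteryVolts(0.0) {}
  StateVar<double> batteryVolts;
};

// The higher-level bot state inherits the subsystem states.
class BotState : public MotorState, public HardwareState {};

int main()
{
  BotState bot;
  bot.leftSpeed.set(4);
  bot.batteryVolts.set(7.9);
  if (bot.leftSpeed.hasChanged())
    std::cout << "left wheel now at speed " << bot.leftSpeed.get() << "\n";
  return 0;
}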

The Web GUI is a JavaScript-enabled control/monitoring 'front panel' and is made possible through the use of RosBridge and a mini-httpd server with the associated JavaScript.  This interface is a bit busy, as it offers a lot of diagnostic info besides the full control and monitoring of sensor feedback.  We can tell which tags the bot sees and see the distance and angle from the center of the video to the current object.  The interface also allows telling the bot to do things with a given AR tag number.   It is shown on this page in a recent incarnation.

The 'display_output' node connects to a Modtronix LCD3S LCD display from SparkFun, in I2C mode, seen in the picture.   The main_brain node mostly talks to this, but any node could talk to it if I so desire in the future, since the ROS topic could be published by other nodes; of course one would then have to be careful to allocate specific display areas for any given module to present data on the display.  Messages allow full or partial updates of the display as well as utility features like clearing it or setting the brightness.

The 'navigation_basic' node reads from the AdaFruit LSM3030LHC I2C board that provides the 3-D magnetic field and 3-D accelerometer data.  The node publishes all it reads, i.e. 3-axis data from each of the magnetic and accelerometer parts of the board.   The node does adjust the 'axes' for the physical orientation I have used for the board, so that X is forward-back, Y is side to side, and Z is up-down.   If your board is in some other orientation it is of course an easy hack to swap what is reported back to the 'main_brain' node.     I leave open the possibility of adding GPS and an altimeter, all of which would be best suited to this module.

An 'rf api' node listens to the tiny RF receiver board from a hacked small RC car RF receiver and is able to send 6 different codes to the main brain.   This node could of course use some other RF input later and the messages would still work, so only this node would need to be altered.  To use this node one would either have to duplicate my hack on the simple RF receivers used in most tiny HO-scale radio cars that often cost under $10, OR hack the lowest-level hardware poll in the software to read other 'keyboard style' inputs from some other source.

A 'serial api' input node receives RS-232 commands and values and passes them to the main brain node.   As of mid Nov 2014 this input is from the SparkCore Wi-Fi interface.   This allows Wi-Fi traffic to hit the SparkCore first, via Android or another web app, and not have to tax the RaspberryPi with the large overhead of a web-safe Wi-Fi piece of software.   We greatly offload the main RaspPi CPU in doing this AND we have our system on the 'Internet Of Things' in a cost-effective and tiny package.  I frankly don't use this much and am leaning now toward the use of RosBridge and a JavaScript web interface, but for what it's worth I still have this.

The 'wheel_control' ROS node receives messages mostly from the 'main_brain' node and knows how to drive the motors for the wheels independently, so I can go forward at different speeds, turn right or left, rotate in either direction, or go backwards.  Commands from the RF control and/or the Wi-Fi interface go to the main_brain node in a high-level sort of way, like 'Forward at speed 4', and decisions are made there to forward them on to wheel_control.  Wheel control sees the high-level movement commands and converts them to the DAC values and control bits required by the custom wheel motors and direction control.  The hardware supported is a MAX5822 dual DAC that then drives my own power transistors.  So to support this using a PWM motor driver would mean converting the lowest-level software to drive a PWM interface.
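
The sketch below illustrates just the speed-level-to-DAC-code mapping idea.  The 0..7 speed range, the linear scaling, and the stubbed-out DAC write are assumptions for the example; they do not reproduce the MAX5822 command format or the real wheel_control node.

// Sketch of mapping a high-level speed level to a 12-bit DAC code.
#include <cstdint>
#include <cstdio>

// Map a speed level (0..7, as in 'Forward at speed 4') to a 12-bit code.
uint16_t speedToDacCode(int speedLevel)
{
  if (speedLevel < 0) speedLevel = 0;
  if (speedLevel > 7) speedLevel = 7;
  const uint16_t maxCode = 4095;                 // 12-bit full scale
  return static_cast<uint16_t>((maxCode * speedLevel) / 7);
}

// Stub standing in for the real I2C write to one DAC channel; a real driver
// would format the chip's command bytes and write them on the bus.
void writeDacChannel(int channel, uint16_t code)
{
  std::printf("DAC channel %d <- code %u\n", channel, code);
}

int main()
{
  // e.g. 'Forward at speed 4' translated for both wheels
  writeDacChannel(0, speedToDacCode(4));         // left motor
  writeDacChannel(1, speedToDacCode(4));         // right motor
  return 0;
}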

A 'collision detect' ROS node monitors the IR sensors, which use the novel Hamamatsu 6986 modulated IR sensors on the mini-IO boards that plug into this bot.  This node also reads in the option switches.  Collisions or the edge of a table can be detected so the robot can avoid them or stop.  Other forms of collision detection would be added to this node as required later.  This node reads from a PCF8574 digital input expander on I2C simply to read in some of the bits and mask them off for the right and left (and perhaps soon a 'rear') detector.  I do have future plans for other forms of collision detection, such as 'cat's whiskers' or micro-switch physical bump detectors, and so the interface that sends messages to main_brain could easily be expanded as needs arise.
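
For flavor, here is the bit-masking side of that idea in isolation.  The bit assignments (bit 0 = left, bit 1 = right, upper nibble = option switches) are made-up placeholders; FiddlerBot's actual wiring and polarity are not documented here.

// Sketch of interpreting one byte read from a PCF8574 input expander.
#include <cstdint>
#include <cstdio>

const uint8_t kLeftEdgeBit  = 0x01;   // placeholder: bit 0 = left IR sensor
const uint8_t kRightEdgeBit = 0x02;   // placeholder: bit 1 = right IR sensor
const uint8_t kOptionBits   = 0xF0;   // placeholder: upper nibble = switches

void reportCollisionBits(uint8_t expanderByte)
{
  bool leftEdge  = (expanderByte & kLeftEdgeBit)  != 0;
  bool rightEdge = (expanderByte & kRightEdgeBit) != 0;
  uint8_t options = (expanderByte & kOptionBits) >> 4;

  std::printf("left edge: %d  right edge: %d  option switches: 0x%X\n",
              leftEdge, rightEdge, options);
  // A real node would publish these on a ROS topic for main_brain to act on.
}

int main()
{
  reportCollisionBits(0x12);   // example byte as if read over I2C
  return 0;
}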


A 'hardware monitor' node was added in early Feb 2015 which reads from the Stc3115 fuel gauge chip on a custom board to get the battery level as well as the ambient temperature.   This node could be easily augmented to read other things like light level and so on; that is a longer-term goal.  I wired my own board (which requires EXTREME care and delicate soldering technique, as the 10-pin surface-mount chip is 2x3mm, so TINY), but I did it and it works.


The 'Arm Control' node runs a unique 'proprietary' crab arm.   This arm design is unique in that a single servo controls first the 'grab' action and then the raising of the arm to a variable height, all with just ONE servo.   I believe this is a 'first', and you heard it here first at Mark-World.com.   The node drives a single PWM port which in turn drives the servo I use, so this node is extremely easy to use, BUT the specifics of my 'arm' mechanism are tightly linked to the driver in this node.   The high-level interface to this node is fortunately non-specific to the hardware implementation by design, so some hacking could adapt this node to other hardware.  The arm control node controls more than just the arm servo, as it is the 'owner' of the servo controller ROS service which encapsulates the Pololu servo controller for the rest of the system to use.

The object detection capabilities are moving along really nicely as of late Feb 2015.  FiddlerBot now leverages the popular and robust AR Tag tracking ROS node called ar_track_alvar from Scott Niekum (Thanks Scott!), which works by inspecting the image from the usb_cam webcam driver and then publishing the object tags 'seen' in the frame.  We also use a 'Localization AR Tag', placed on the ceiling, to get a bearing, but that is rather new.  So the visual abilities use the usb_cam driver plus the ar_track_alvar ROS node to do the AR Tag recognition.
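
A minimal consumer of that tag data might look like the sketch below, which reports a tag's distance and bearing from the camera axis, similar in spirit to what the web GUI shows.  The 'ar_pose_marker' topic and the ar_track_alvar_msgs package name follow the package's usual defaults and may differ slightly from the exact setup used on this bot.

// Sketch: subscribe to ar_track_alvar markers and report range and angle.
#include <ros/ros.h>
#include <ar_track_alvar_msgs/AlvarMarkers.h>
#include <cmath>

void markersCallback(const ar_track_alvar_msgs::AlvarMarkers::ConstPtr& msg)
{
  for (size_t i = 0; i < msg->markers.size(); ++i) {
    const geometry_msgs::Point& p = msg->markers[i].pose.pose.position;
    // In the camera's optical frame, z points forward and x points right.
    double distance = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    double angleDeg = std::atan2(p.x, p.z) * 180.0 / M_PI;
    ROS_INFO("tag %u: %.2f m away, %.1f deg from image center",
             msg->markers[i].id, distance, angleDeg);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tag_watcher");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("ar_pose_marker", 10, markersCallback);
  ros::spin();
  return 0;
}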

The webcam image transmission to the FiddlerBot GUI (seen in an image on this page) is done by the mjpeg_server code.  The webcam can be any of the assorted standard USB webcams that are handled by the usb_cam software available for use in the ROS environment.

The full-featured JavaScript interface is made possible through the use of the RosBridge suite components, which link ROS topics (messages) into the JavaScript library.

The ability to pan and tilt the webcam has been added, along with a ROS service for general servo board control.  This means the ClawArm as well as the camera pan servo are both now handled through requests to the servo controller ROS service.   Several autonomous modes, like search, face, or grab object, all use the webcam to do their work.
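
A servo-control service of that general shape could be sketched as below.  The 'SetServo' srv type, its fields, and the package name are hypothetical (you would have to define the srv file in your own package); this is not FiddlerBot's actual service interface.

// Sketch of a servo-control ROS service server, assuming a srv roughly like:
//     int32 channel
//     int32 position
//     ---
//     bool ok
#include <ros/ros.h>
#include <my_servo_pkg/SetServo.h>   // hypothetical generated srv header

// Handle one request: push the requested position to the servo channel.
bool setServo(my_servo_pkg::SetServo::Request&  req,
              my_servo_pkg::SetServo::Response& res)
{
  ROS_INFO("servo channel %d -> position %d", req.channel, req.position);
  // A real implementation would send the command to the servo controller
  // board (e.g. a Pololu unit over its serial protocol) here.
  res.ok = true;
  return true;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "servo_control");
  ros::NodeHandle nh;
  ros::ServiceServer server = nh.advertiseService("servo_control", setServo);
  ros::spin();
  return 0;
}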

I have removed the node I made that monitored a color object detection board called 'Pixy'; it packaged up the objects in view and sent them on to the main brain for processing, using the I2C interface to the Pixy rev 5.    I found the Pixy board to be extremely temperamental and was frustrated with the Pixy's inability to recognize objects unless very controlled lighting conditions exist, which is tricky to supply in a robotic situation.  It has been replaced with object detection from the webcam using AR Tag recognition (black and white blocks).





Project Update Milestones through early April 2015:
(This section serves as a rather simple one-directional 'blog' of recent progress.)

As of 4/6/2015, the camera has been updated to a 2-degree-of-freedom pan/tilt camera head, which allows a far greater range of sight from a given bot position.  I have upgraded the webcam to one with a 20% greater field of view, and upgraded the headlights to 4 levels of brightness.  The motors have been upgraded to a higher-quality set of 10 rpm motors that have much less backlash, or 'slop', in the gears, offering better control.  A BlueTooth module is now in use that provides connectivity to the CPU's main console, so should there be Wi-Fi network or startup issues I still have console access to assist with startup and scripts.

As of 3/26/2015, the bulk of this week's work has been to clean up code and encapsulate things that were globals into my main BotState, which is a multi-layer template-based class that I will not discuss here.  I had meant to demo this bot on March 25, 2015 but had unfortunately forgotten a key cable, so that must wait.  I have added a real 'Laser' that I can control with queued bot commands, and worked out a few demos like finding 3 different objects and then activating the laser on each of them for fun.  Another sequence of commands I had wanted to show had the bot seek out a given object, pick it up, move it aside, and return to its starting placement.  These things are all possible with improved algorithms for object finding and grabbing as well as other bug fixes.

As of 3/18/2015, a few key hardware improvements have gone into the bot.   I have replaced the badly speed-regulated motor drive circuit with the most excellent DRV8830 driver chips, which internally provide a precise output voltage: set 1.5V out and you get that, and the motors can move very slowly because these I2C-based driver chips have internal feedback and do PWM to keep the output constant.    The second, less dramatic change is that I'm using a 180-degree camera pan servo instead of a 120-degree one.   Also last week I implemented a 'localization run' where the bot camera pans far left to a mirror that shows the ceiling, where I place an AR tag.   This gives accurate direction info and not-so-accurate displacement from below the AR tag.   This could be cleaner with a second up-facing cam, of course, but I'm concerned about the CPU power to do that just yet.  I also updated this website to have a nice blue outline graphic of an IC up close.

As of 3/16/2015, we have restructured the software so that a queue holds a sequential set of tasks that FiddlerBot should carry out.   The tasks generally assume that FiddlerBot can find the object for the action (an AR Tag numeric ID), which is also a very new ability.   FiddlerBot can be told to grab a given object, then in a following task to move to a given other location (an AR Tag recognized numeric ID), and then drop that object there.    So we have hit a key benchmark at this time: we can give FiddlerBot a set of instructions and FiddlerBot can follow the directions in a fully autonomous manner.
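
A bare-bones sketch of such a task queue is below.  The task verbs ("grab", "goto", "drop") and the struct layout are illustrative guesses, not FiddlerBot's actual task format.

// Minimal sketch of a queued-task scheme.
#include <queue>
#include <string>
#include <iostream>

struct Task {
  std::string action;  // e.g. "grab", "goto", "drop"
  int tagId;           // AR Tag numeric ID the action refers to
};

int main()
{
  std::queue<Task> tasks;
  Task grab = {"grab", 3};   // find and grab the object with tag 3
  Task go   = {"goto", 7};   // drive to the location marked by tag 7
  Task drop = {"drop", 7};   // drop the object there
  tasks.push(grab);
  tasks.push(go);
  tasks.push(drop);

  // The brain would pop one task at a time and only advance once the
  // current task reports completion.
  while (!tasks.empty()) {
    const Task& t = tasks.front();
    std::cout << "executing: " << t.action << " on tag " << t.tagId << "\n";
    tasks.pop();
  }
  return 0;
}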

As of 3/4/2015, we now have FiddlerBot on a full WiFi interface with the JavaScript GUI seen in a picture on this page.  This was first shown to HBRC (robotics club) on Feb 25, 2015, when WiFi was first figured out for my BeagleBone Black processor.  The operator sees what the webcam sees and can control the webcam pan as well as the wheels and the claw arm, all from the GUI.   This is a major milestone and a lot of fun.

As of 2/18/2015, we now have a working webcam on its own pan servo, all on the BeagleBone Black and no longer on the RaspPi.   Now we have a full web interface where JavaScript is served from FiddlerBot to the Chrome browser.  We can control the motors and camera pan from the web browser.  The web browser also shows, with roughly 1-second updates, the current view of the bot as well as most of the important monitors, so we have a remote-control interface with vision at this time.  Kernel device tree changes now enable two more serial ports that are bi-directional.  Enabling the PWM ports was given up for now in favor of a new, separate Pololu 8-channel PWM board, which at this time runs the pre-existing servo for my custom arm/claw and now also runs the newly added webcam pan servo.   The debug serial console uses a back-mounted right-angle connector so I can get to it all the time using a 3.3V-capable, pin-compatible adapter to my PC (very handy).

As of 2/8/2015, we now have a 'hardware_monitor' node which at this time reads battery voltage and ambient temperature from a custom board using the Stc3115 chip.  (This was a VERY tricky soldering job on this TINY IC.)

As of 1/29/2015, we have integrated RosBridge and the mini-httpd server with an associated JavaScript control panel, which as of Feb 2015 allows web control of FiddlerBot.   This interface is in current development, but its 'proto' form shows assorted FiddlerBot sensor data in a semi-live updated form and will soon show video from a webcam now being integrated into FiddlerBot (mid Feb 2015).   The idea is to 'see' what FiddlerBot sees and then control his movements, including his claw, remotely.  This change was extreme in that it required moving the CPU to a BeagleBone Black with an adapter board plus assorted but fairly mild software changes.  It was a HUGE effort, requiring a move from the Raspberry Pi B to the BeagleBone Black using a custom home-brew adapter board as well as a full system re-build on top of Ubuntu 14.04.1 and ROS Indigo.   In short, this massive change was required due to the incompatibility of RosBridge with my RaspPi environment, but it also gets us significantly more CPU power.  I was also hoping for many PWM ports, but sadly the 3.14 kernel is short of 'clean' PWM support, so I will use an external board running on an extra UART of the BBBlack that I have now gotten to run.

As of 12/27/2014, I have replaced the drive motors with nicer, lower gear ratio versions.  I don't have a picture up yet, but it looks very similar, just with a different camera board.  I have upgraded his camera mount so that I now mount the Pi RaspiCam camera on him but can easily swap back in the Pixy color recognition camera in a few minutes.    Because I am having significant grief getting reliable color band recognition, I decided to put in a camera and then have that camera show its view on my Android phone over wireless.  The idea is that the Android program I have now will be beefed up so I can see what is going on at maybe 5 frames per second and control the bot with the phone to move about and grab things.  It could still sense the table edge and so on, but full autonomous operation will have to wait till I can make firmware for the Pixy using GCC, and that is 1st quarter 2015 or so, I suspect.

MotoBot was shown to the Home Brew Robotics Club at its Sept meeting in Mountain View, CA and was, I believe, well thought of at that meeting (more or less).


As of 11/15/2014, quite a bit has been done and we are at hardware complete and ROS node architecture complete.
- The Pixy camera module has been moved over to the I2C bus, as USB was just too complex and CPU intensive, plus plugging in the Pixy would crash the RaspPi.  So now I can leave I2C disconnected or the Pixy off and things will run, then plug it in as I like and it just works.
- The Pixy camera system is reporting nicely to the main node now, so I have implemented object tracking.  This means RosBo can recognize a 3-color pattern on an object, know where it is, then home in on it and sit in front of it.  This is finally recognition with action.
- The 'claw-arm' has been fully integrated now with an 'arm control' ROS node that includes driver code to drive its servo using the PWM circuit on the Raspberry Pi.   The arm was tested for best layout and I am using a micro-servo to actuate it to nicely close on and raise objects.

As of 10/5/2014
This time is fuzzy, but in short the 'navigation_basic' node was added to monitor and report sensor data from the AdaFruit LSM3030 board with its 3-D magnetometer and 3-D accelerometer.  Communication is over I2C.

As of 9/30/2014  (Most of last month's time was spent on significant MotoBot improvements)
- The crane that will grab, lift, and lower/release objects has been undergoing a few iterations in the motor/long-arm mechanics to optimize it for operation with a lower amount of pull required from the motor.
- A set of 8 headlights has been installed to light the colored objects in front of the robot.  This enhancement is required because the Pixy camera is very sensitive to lighting and has proven to be a very touchy system to get to recognize things as easily as its developers advertise its clever recognition abilities.  I believe I will have to have a very controlled setup, avoiding any other colors besides the objects themselves, to get the system to work reliably.  We shall see.

As of 8/28/2014
- A crane that will grab, lift, and lower/release objects was developed today.  One motor does both the grab and lift functions.  Nifty!

As of 8/22/2014
- A 1st-cut Android application that can talk over the cloud to the robot's Wi-Fi enabled SparkCore processor to control this bot now works.
 

As of 8/12/2014
- We have a Pixy color-sensitive object recognition board in the front that updates shared memory with the blocks that have been detected (size and location).  A 'pixy_objects' node then reads the updated shared memory, using a semaphore lock, and passes object location information to the 'main_brain' ROS node.   At this time I am not acting on this information but will of course do that next.
- The bot can boot up and come online without any internet connection, so this bot is now able to act on its own.
- We also have a 5600 mAh battery mounted under the bot, so it is totally self-powered and should run for over 2 hours all on its own.

As of 8/4/2014
- A wheel_control ROS node receives motor control commands like 'set right wheel to speed 6'.
- The basic bot is now on a platform (shown in pic) and is fully ready to trek around on its own as a next step.
- Added the 'display_output' node to drive a Modtronix LCD3S LCD module from SparkFun, which is driven over the I2C bus by that node.
- The mini IR proximity modules I build are shown in the front and are today readable by the 'collision_detect' ROS node, so the bot can act on detection.

- Multiple I2C devices are controlled on one bus from different ROS nodes now, so I use System V semaphores (safe I2C bus locking).

As of 7/2014
- We have the SparkCore sending internally generated speed control commands over RS-232 serial to a serial_api ROS node on the RaspberryPi.
- The serial_api node publishes commands to the 'main_brain' ROS node over a ROS topic (outbound message mechanism).
- A 'collision_detect' node will read sensors and update the 'main_brain' by publishing on a ROS topic, which at this time is sending dummy updates.




Closeup Picture Of My Custom IO board

Robot3ControlUnit



BeagleBone Black to RaspPi board with PWM and Bottom of IO board

Io board details




I hope you have been entertained by viewing the mini-robots from Mark-World!