Sure: here is the code that uses this capability in the loop functions' PostStep():
auto cb = [&](argos::CControllableEntity* robot) {
robot_post_step(dynamic_cast<argos::CFootBotEntity&>(robot->GetParent()));
  caches_recreation_task_counts_collect(
      &static_cast<controller::base_controller&>(robot->GetController()));
};
IterateOverControllableEntities(cb);
Here, robot_post_step() takes the controller associated with the controllable entity and does the following:
/*
* Watch the robot interact with its environment after physics have been
* updated and its controller has run.
*/
auto iadaptor =
robot_interactor_adaptor<robot_arena_interactor, interactor_status>(
controller, rtypes::timestep(GetSpace().GetSimulationClock()));
auto status =
boost::apply_visitor(iadaptor,
m_interactor_map->at(controller->type_index()));
/*
* Collect metrics from robot, now that it has finished interacting with the
* environment and no more changes to its state will occur this timestep.
*/
auto madaptor =
robot_metric_extractor_adaptor<depth1_metrics_aggregator>(controller);
boost::apply_visitor(madaptor,
m_metric_extractor_map->at(controller->type_index()));
controller->block_manip_collator()->reset();
Basically, I use this functionality to (1) iterate over the swarm and collect metrics from each robot (the task it is currently executing, its current location, heading, collision avoidance status, etc.), (2) have the robots interact with the environment, i.e., send them events related to block pickup/drop and cache pickup/drop, and (3) gather information about each robot's current task, which the loop functions use to decide whether to recreate an intermediate drop site (a cache) between the food source and the nest after the swarm has depleted it. This lets me avoid maintaining my own thread pool for these iteration operations, which must happen every timestep and are very slow without threads for large swarms (for small swarms, serial iteration is fine).
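For reference, the serial fallback that suffices for small swarms is just a plain loop applying the same per-robot callback; here is a minimal sketch (the Robot struct and iterate_serial() are hypothetical stand-ins, not ARGoS or project code):

```cpp
#include <functional>
#include <vector>

// Hypothetical stand-in for a controllable entity.
struct Robot {
  bool stepped = false;
};

// Serial equivalent of handing a callback to IterateOverControllableEntities():
// apply the callback to each robot in turn on the calling thread.
void iterate_serial(std::vector<Robot>& swarm,
                    const std::function<void(Robot*)>& cb) {
  for (auto& robot : swarm) {
    cb(&robot);
  }
}
```

The threaded version in ARGoS has the same interface from the caller's point of view; only the dispatch underneath changes, which is why switching between serial and threaded iteration requires no changes to the callback itself.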
I started out using OpenMP for the swarm iteration, which worked OK. But on a 16-core machine simulating a large swarm (say 16,000 robots), with ARGoS running 16 threads, I used an additional 16 OpenMP threads for the iteration. The OS then had to switch 32 threads in and out each timestep, which accrued non-negligible overhead on top of the overhead of OpenMP's own thread scheduling. Maintaining my own pthread pool still incurs the OS context-switching overhead (32 threads on a 16-core machine in the example above), but is faster than the OpenMP implementation. Using ARGoS's thread pool is 20-25% more efficient than either of the other two options, which for the 24-hour cluster jobs I run is a significant savings of 5-6 hours.
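The OpenMP variant I started with looked roughly like this (a sketch of the shape described above, not my actual project code; robot_post_step() here is a placeholder for the real per-robot work):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a robot and its per-timestep work.
struct Robot {
  int work_done = 0;
};

void robot_post_step(Robot& r) { ++r.work_done; }

// OpenMP-style iteration: the pragma spawns a separate thread team each
// timestep, on top of the threads ARGoS itself already runs, which is the
// source of the context-switching overhead discussed above. Without
// -fopenmp the pragma is ignored and the loop simply runs serially.
void post_step_openmp(std::vector<Robot>& swarm) {
  #pragma omp parallel for
  for (std::size_t i = 0; i < swarm.size(); ++i) {
    robot_post_step(swarm[i]);
  }
}
```

The ARGoS thread-pool approach avoids this double thread team entirely: the iteration reuses the simulator's existing worker threads instead of creating a second set that competes with them for cores.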