
 

Coop Control

Damon Sweeney & Joanna Zhang

 

Coop Control Image


Introduction

Problem Description

Initially we were approached by the father of a Kent alumnus, who was in search of a simple technical solution for automating the door of a chicken coop; his intention was to use this to help guarantee the safety of the coop. To fully understand the problem domain we conducted background research into how other smallholders manage their poultry, and concluded that many face hardships related to keeping them.

From our research we found that the main problem smallholders face is still the act of ensuring that the chickens are let out in the morning and locked up at night; however, we noted a variety of other issues as well. These included: knowing whether the flock had access to victuals, whether eggs were present, and ensuring that the air conditions inside the coop were optimal for the chickens' health, among others.

After performing market research into existing products we concluded that a gap existed for a new system or service encompassing solutions to many of these problems. As such we have built a modern technical solution that helps alleviate some of these burdens.

The Solution

As the problems varied between the smallholders we spoke with, we decided it was best to build an extensible, modular system, such that the solution could later be tailored to any given end user.

Though we have built this system with expansion in mind, the scope of this project remains focused on the original client who approached us to build an automated door, Steve. After speaking with him about the research we had conducted with the other smallholders, we concluded that he was interested in a system incorporating an automated door, victual monitoring, air quality monitoring, electric fence voltage readings and egg presence detection.

Out of Steve's initial requirements for the project, we have produced devices to monitor water drinker levels and air conditions, to automate the coop door and to detect the presence of eggs. Regarding the monitoring of feed levels, Steve decided that this was not necessary as he preferred to feed the chickens himself, stating "I think it's important you need to interact with them". As for the electric fence voltage readings, we were unable to produce this device due to the health and safety concerns of working with high voltages (3,000 V).

The solution we have developed can be divided into two main components, the modular network of devices itself and the modern web application used to interface with it. The modular network is responsible for aggregating the data from the sensors and directly interacting with the coop itself.

 

Modular Network

Control Hub

The main device in this modular network is the control hub; its responsibility is to orchestrate the control of the other devices and insert received data into the database. This device provides a REST interface to allow other parts of the system to interact with it. As it stands the hub supports two types of device: those directly connected via the CAN bus, and those connected wirelessly over Zigbee2MQTT.
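To make the hub's role concrete, below is a minimal sketch of how received CAN frames might be read on the hub. It assumes the python-can library over Linux SocketCAN, and the channel name and arbitration IDs are invented for illustration; the report does not specify the hub's actual CAN library or ID scheme.

```python
# Sketch only: reading frames off the coop's CAN bus with python-can.
# The channel name and arbitration IDs are illustrative assumptions.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

for msg in bus:  # iterating a Bus yields received messages
    if msg.arbitration_id == 0x10:    # e.g. air condition monitor
        print("air reading:", msg.data.hex())
    elif msg.arbitration_id == 0x20:  # e.g. door state change
        print("door state:", msg.data.hex())
```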

Drinker Level Monitor

The drinker monitor is a simple standalone device produced by Xiaomi. In normal operation it requires a proprietary Xiaomi bridge to view its readings; however, using a Node library we are able to extract this information and receive it over MQTT. The device reports when the water level changes from high to low or vice versa.

Air Condition Monitor

This sensor monitors temperature, humidity, pressure and dust particulate levels inside the coop. The device was custom built using an ATmega328P and off-the-shelf sensing components, and is connected to the main hub via the CAN bus.

Egg Identification

This device uses neural-net image classification to detect the presence of eggs inside the coop. Due to the processing power required for this classification, a client-server model has been produced. The device inside the coop captures an image of the nest box and sends it to an authenticated REST server for processing. The REST server runs the neural net and uses GPU CUDA cores to speed up the processing. Once classification is complete the result is returned to the device in the coop, which proceeds to transmit this result to the main hub via the CAN bus.
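A hedged sketch of the client side of this flow is shown below: the coop device reads a captured image and POSTs it to the classification server. The endpoint URL, field names and token handling are illustrative assumptions, not the project's actual API.

```python
# Sketch only: send a captured nest box image to the classification
# server and read back the result. URL and payload are hypothetical.
import requests

SERVER = "https://classifier.example/classify"  # hypothetical endpoint
TOKEN = "..."  # credential for the authenticated REST server

with open("nest_box.jpg", "rb") as image:
    response = requests.post(
        SERVER,
        files={"image": image},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )

result = response.json()  # e.g. {"egg_present": true}
# The coop device would then forward this result to the hub over CAN.
```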

Automated Coop Door

The automated coop door can be controlled via three individual methods: automation, physical button presses and the web application. The device consists of an ATmega328P, a photodiode and a real-time clock (RTC). In automation mode, using these sensors, the device can detect when to open or close the coop door automatically. As for button presses, when a physical button press is received the door performs the action and sends information about the state change to the hub. Alternatively, when a request is sent via the web application, the hub acts as a proxy and forwards this message to the door. All communication between the hub and the door is performed over the CAN bus.

Domain Specific Language

A language has been created to give the user more specific control of the network, so that they are able to configure the system to their needs. This language provides three top-level constructs: execute code on a given event (ON), execute code at a specific time of day (AT), or execute code at a fixed time interval (EVERY). As it stands, the actions the user can perform include controlling the automated door, sending email and sending SMS messages.
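For illustration, a snippet in this language might look something like the following; the concrete ChickenLang syntax is not reproduced here, so treat the exact keywords and arguments as a hypothetical rendering of the three constructs.

```
# Hypothetical syntax, for illustration only.
AT 07:30          open_door
ON  door_closed   send_sms "Coop locked up for the night"
EVERY 6h          send_email "Latest air condition readings"
```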

 

Web Application

Control Hub Interface

As a whole, the web application mainly acts as an interface to the main control hub and, by proxy, the modular network of sensors and devices. The user is able to observe the operation of the network by viewing the system logs, and interact with it through the means outlined below.

Current Data Display

The main page of the web application displays the current state of the modular network; it shows readings from the automated door, water drinker monitor, air condition monitor and the egg identification module. This data is refreshed automatically every 30 seconds or can be force-refreshed by the user.

Historical Data Visualization

The web application is also a portal for viewing historical data stored within the database. The data for each sensor is displayed in a corresponding graph, with the design dependent on the kind of data being displayed.

Hub Code Editor

The user is able to manage and edit multiple code snippets for the hub's domain specific language. The user can set an active snippet and maintain a set of alternative, inactive code snippets.

 

Timeline

Timeline of project

 

Authors

Damon and Joanna are two third-year Computer Scientists interested in the Internet of Things, with a focus on the communication between low-level devices. They both started studying at the University of Kent at the start of the 2015 academic year, and both have a year's worth of industry experience.

 

My name is Damon Sweeney and I am an undergraduate Computer Scientist at the University of Kent. My career in computing started whilst I was at sixth form college, and since then I have completed two thirds of my degree, conducted research into the security of encrypted traffic of Internet of Things devices, and spent a year in industry. Whilst in industry I worked as a developer for a blockchain startup called Arweave, where I primarily programmed back-end system code in Erlang. In January of 2018 the company joined Techstars, one of the largest startup accelerator programs worldwide, and as such I spent half a year living in Berlin connecting with the growing tech community.

 

My name is Joanna Zhang and I am a final year student at the University of Kent. I am on the committee for TinkerSoc and I have a year's worth of experience working in a small corporation focusing on web development. During this placement I created an Angular application built on Umbraco from scratch, alongside creating Node applications to help display customer data. I learnt a wide range of web skills, ranging from making web applications mobile responsive to writing maintainable back-end code in C#. I am currently aiming to beat Budi, one of my lecturers, at badminton before I finish my degree.

 

Learning Experiences

This project provided many opportunities to learn new skills in areas that our degree does not explicitly cover, as well as reinforce some that it does. Below is a list of the key learning experiences we feel we have had throughout the duration of this project.

 

Building and debugging electronics

Though not strictly Computer Science, it was really interesting to gain insight into electronics and how to piece together a system from components. This is an area our degree does not cover in depth, and we believe for this reason a lot of people are hesitant to pursue these types of projects.

 

Interfacing with low level hardware

Covered within the Internet of Things module, this project allowed us to continue interacting with low-level hardware using C/C++. It's interesting to program in an environment where, as a programmer, you carry a much greater set of concerns.

 

Building a Controller Area Network (CAN)

Learning how to build a network using CAN bus, a robust communication protocol designed for cars. Setting up this network was one of the more difficult and time-consuming tasks of the project, due to our lack of experience as well as the sparse documentation available for interfacing a CAN module with a SOC.

 

Role based authorisation using JWT

Learning how to communicate between client and server with role based authorisation using JSON web tokens (JWT).

 

Designing and implementing a primitive language

Designing and implementing a simple language. This ranged from defining the syntax, that is the set of valid tokens, to the semantics, the rules for valid sentences within the language.

 

Performing image classification with AI

Using a pretrained neural net to perform image classification against a given set of classes.

 

Creating relevant data visualisations

Creating relevant data visualisations using modern web graphing libraries.

 

Acknowledgements

Daniel Knox

Thanks to Daniel Knox for providing support and excellent week-to-week discussion to keep us on track. The advice given at the end of the first term really helped us put the project into perspective and push us towards achieving our goals.

 

Sally Fincher

Thanks to Sally Fincher for discussing the initial research and helping to frame the scope of the project. In hindsight we are glad we took this advice on board, as we may have been a little too ambitious in the early stages.

 

Keith Greenhow

A big thanks to Keith Greenhow for helping with the physical construction of all things pertaining to the project, as well as coming into the Shed during his Christmas break. All it cost us was taking him out to lunch!

 

Daniel Andrews

Massive thanks to Daniel Andrews for helping with electronics design and implementation, as well as the continuous encouragement throughout the year.

 

Scott Owens

Thanks to Scott Owens for briefly discussing the intermediate representation of the system's domain specific language, ChickenLang; the advice given was sound and worked flawlessly.

 

Michael Berry

Thank you to Mike Berry, who made himself available at all hours providing a variety of electronic components and helping debug erroneous circuits; he did all this even with a child right around the corner.

 

Elliot Carr

Thank you to Elliot Carr for discussing ideas related to the automated door and providing equipment to get this implemented.

 

Simon Cooksey

Thank you to Simon Cooksey for helping attempt to debug the early demo implementation of the CAN bus network as well as being available for discussion regarding the language implementation. It took far too long to debug that CAN!

 

Drone Shepherd Group

We would like to thank Marielle Valdez, Nicholas Bailey and Finlay Shepherd for providing healthy competition and motivation throughout this project.

 

Project Retrospective

Roles Taken

With a small development team seeing this project through to fruition, the line between fixed roles was somewhat blurred, as effort was put towards whatever was required at the time. That said, the next sections briefly outline who took the lead on specific parts of the project.

 

Research stage

At this stage, as we were defining the scope and the actual problem we were tackling, both of us made a full effort to be part of the preliminary research. We both visited smallholders within the area and outlined the project together.

 

Product development stage

 

Poster creation

The bulk of the poster design was done by Damon due to his background in graphic design and advertising; however, inspiration was taken from existing work in the School of Computing. Iterations were produced with feedback from Joanna Zhang, Daniel Knox and Keith Greenhow.

The content was written in tandem by both Damon and Joanna, with critical feedback provided by Daniel Knox. The design was made using Adobe Photoshop.

 

Video creation

Joanna took the helm in creating the demonstration video due to her past experience with the Sony Vegas video editing software. Damon assisted with the recording of the different segments and helped storyboard the content being shown.

 

Technical report author

As this report is a technical overview of the entire system implementation it was written by both Damon and Joanna.

 

Project Fair

At the project fair we demonstrated what we had produced with a demo of the prototype hardware, a live feed of the partially implemented automated door at the client's premises, and an informational poster. We were awarded the prize for best at fair by the Computer Science faculty; votes were cast by lecturers, special guests and alumni. It was engaging to interact with colleagues and other visitors who showed interest in our work, and it was inspiring to see the other projects presented.

 

Poster & Display

Coop Control Poster

Display setup

Team Photo

Placards for system devices

 

Discussion & Advice Received

We had a chance to speak to a lot of interesting people at the fair, many of whom had ideas on how the project could be improved or potentially commercialized if we were to continue with it. A few of the major ideas discussed are listed below.

 

 

Future Work

Due to time constraints, and being a two-person team, there were a few tasks that we would have liked to develop and explore had we had more time and resources.

 

Low Powered Solution

As our system was tailored to our client, we built a solution that did not take power consumption into consideration, since our client had access to mains power within the coop. In contrast, the majority of smallholders we interviewed did not have access to mains power; therefore, if we want to tailor our system to the general smallholder market, research needs to be conducted into low-powered solutions that run on batteries.

 

Integrate Emails with the Language

Currently, email addresses entered into the system are only stored in the database; nothing else is done with them. For future work, we would like to integrate these addresses with the language. For example, a snippet could specify a username to send emails to, and an email would then be sent to the addresses in the database pertaining to that user.

 

Power Cut Warning

As our solution runs off mains power, if there is a power cut, monitoring data will not be stored in the database and the door will be unable to open or close. We envisaged a solution running on mains power with a backup battery: if the mains power went out, the module would still be able to send a text message to the client, notifying them of the power cut.

 

Sensors

There were a few more sensors we would have liked to add to our system if time and health and safety had allowed.

 

Electric Fence Monitor

The client asked for an electric fence monitor to detect when the voltage dropped below a certain threshold. Unfortunately, as this involved working with high voltages, we were unable to develop or prototype an idea for this solution due to health and safety concerns.

 

Live Camera Feed

A live camera feed could be added to the web interface so the client could double check the status of the door and keep an eye on their chickens remotely.

 

Image Analysis

For future work, the image analysis of eggs can be improved in multiple ways.

 

Count Eggs

During the project fair, we were asked if our neural net would be able to count the number of eggs. At the time, our egg analyser was only able to detect the presence of eggs, not the quantity. Work would have to be done to add more classes for each count of eggs, or another post-processing step would need to be added to perform this operation.

 

More Robust Image Analysis Classification

For future work, we would like to train the neural net with more data, allowing it to become more robust. During the project fair, we realised our classifier recognised foil-wrapped chocolate eggs; luckily, these are unlikely to be found inside the coop. For the scope of this project, we used approximately 400 training images: 200 containing eggs and 200 without. Before deploying this in the field, we would need to add more training data to ensure only real eggs are recognised.

 

Remove Zigbee2MQTT

Due to time constraints, we used a Node.js application named Zigbee2MQTT, which allowed us to retrieve data from a Xiaomi Aqara water sensor and publish it to an MQTT broker. In the future, we would like this functionality directly integrated into the hub. This could be done either by deconstructing the Xiaomi Aqara ZigBee packets ourselves, or by building a bespoke but equivalent sensor.

 

Sensor pairing mechanism

To accommodate the smallholder market, we would need multiple devices that can be switched in and out according to the needs of the user. To do this, we would need to develop a sensor pairing mechanism, which would require changes to our packet structure. A custom discovery protocol would also need to be implemented, alongside changes to the web interface and the database to support this. The language would also require modification so that the user can address a specific device.

 

Technical Details

Acronyms & Keywords

Below is a list of the acronyms used within this document; please refer here for clarification.

 

CAN

Controller Area Network: A robust vehicle communication bus standard.

 

MQTT

Message Queuing Telemetry Transport: A publish-subscribe message protocol.

 

GPIO

General Purpose Input/Output: A standard interface used to connect a microcontroller to other hardware.

 

SPI

Serial Peripheral Interface: A synchronous serial communication bus, used for short-distance data transfer between devices.

 

RTC

Real-time Clock: A piece of hardware that keeps relatively accurate track of the current time, often powered via an external battery so as to keep time whilst the system is off.

 

SOC

System On a Chip: A small device that incorporates the components of a full computer onto a single chip.

 

BLE

Bluetooth Low Energy: A low powered variant of Bluetooth technology.

 

IEEE

Institute of Electrical and Electronics Engineers: Organization founded in 1963 comprised of students, working professionals and academics.

 

WiFi

Wireless Fidelity: A family of wireless networking technologies, standardised by the IEEE (802.11), that allows devices to communicate without wires.

 

RPM

Revolutions Per Minute: Total 'spins' a motor spindle makes within a minute (within the context of the project).

 

STD

Standard Library: A set of functions and classes which are part of the ISO standard for C++.

 

DAC

Digital-to-Analog Converter: Converts a digital signal into an analog signal.

 

ADC

Analog-to-Digital Converter: The reverse of a DAC; this converts an analog signal to a digital signal.

 

IDE

Integrated Development Environment: A piece of software that aids a programmer in software development, providing functionality such as a code editor, alongside build automation tools.

 

EEPROM

Electrically Erasable Programmable Read-Only Memory: A type of non-volatile memory which allows data to be erased and reprogrammed. It supports a limited number of rewrite cycles.

 

DC

Direct Current: Provides a constant voltage or current, in contrast to alternating current which periodically reverses direction. Batteries are a good example of a direct current power supply.

 

IP Rating

International Protection Marking: A standard which classifies the degree of protection provided against solid particles (such as dust) and water. The first digit describes the protection against solids, and the second against water. For example, IP68 means the device is dust tight and can be "immersed in 1.5 meters of freshwater for up to 30 minutes".

 

COTS

Commercial off-the-shelf: This is an adjective that describes hardware, or software, that is available to be purchased by the general public.

 

PCB

Printed Circuit Board: A board that connects electronic components using conductive tracks.

 

UI

User Interface: The point where interaction occurs between a user and a machine.

 

PLC

Programmable Logic Controller: A solid state computer that monitors inputs and outputs and makes logic-based decisions dependent on these.

 

DSL

Domain Specific Language: A language built to solve a specific problem or operate within a given domain.

 

CFG

Context-free Grammar: A type of formal grammar consisting of a set of production (replacement) rules; the grammar describes a specific language via these rules.

 

RBAC

Role-based access control: This is a way to restrict access to certain parts of a system to specific authorized users.

 

JWT

JSON Web Tokens: A standard for generating access tokens which declare a number of claims which are signed by one party.

 

SQL

Structured Query Language: A domain specific language used for managing data stored in a database.

 

Hard Technologies

Below is an abridged list of the main hardware used within our project.

 

Arduino Uno

The Arduino Uno is a simple development board that allows easy interfacing with an ATmega328P microcontroller. Within our project we used the Arduino Uno to develop prototypes for both the automated door and the air condition module.

 

Raspberry Pi 3 Model B

Although originally intended to be an Odroid C2, we opted to use a Raspberry Pi 3 Model B for the main control hub. Both of these devices are small single-board computers with General Purpose Input/Output (GPIO) pins that allowed us to interface with other hardware.

 

Raspberry Pi Zero W

We chose to use the Pi Zero W for the water sensor ZigBee receiver and the image analysis module. This is another single-board computer; we chose this specific device due to its cheap price point, its built-in wireless capabilities and the fact that we did not require a high processing speed (1 GHz).

 

H-Bridge driver (L298N)

Used to help control the automated door, an H-bridge driver was required to switch the polarity of the applied voltage, allowing the motor to rotate clockwise or anticlockwise.

 

Stepper Motor Driver (A4988)

Initially used in the early iterations of the coop door module, this driver is commonly found in 3D printers and is a popular choice for driving stepper motors.

 

CAN Interface (MCP2515)

The MCP2515 CAN module was used within our system to interface with our physical CAN bus. The device itself can communicate across the bus and speaks to the connected parent device over SPI. The devices that use the MCP2515 are: the automated door, the air condition monitor module, the main control hub and the image analysis module.

 

Air Condition Sensor (BME280)

The BME280 is an environmental sensor that reports temperature, humidity and pressure. It was used for the air condition monitoring module within the system.

 

Sharp Dust Sensor (GP2Y1010AU0F)

This device is an optical dust sensor developed by Sharp; it reports an estimate of the amount of particulates within the air. This sensor was used for the air condition monitoring module within the system.

 

USB ZigBee Receiver (CC2531)

This device is a USB dongle that allows a PC to interface with 802.15.4-compliant devices, including ZigBee. Within our system we use this to receive messages from the external water sensor.

 

Xiaomi Aqara Smart Water Sensor

The Aqara Smart Water Sensor is a small wireless device that, when in contact with water, creates a connection between two pins and transmits a message over ZigBee. This message states whether water is present or not; we use this device within the system to report when the water drinker level is low.

 

Logic Level Shifter Converter (3.3V - 5V)

This logic level converter is a simple device that either steps up a 3.3 V input to a 5 V output or vice versa. Within the system this device is used to connect the Raspberry Pi for the main control hub and the egg identification module to an MCP2515 CAN interface. It is required as a middleman between these devices, as the Pi GPIO can tolerate up to 3.3 V whereas the MCP2515 requires 5 V for the transmitter.

 

Generic USB camera (5MP)

This generic USB camera appears on a Unix machine as a video device when connected. It is used within the system as the camera responsible for taking the picture of the coop for the egg identification module to handle.

 

Worm Gear Motor (Hilitand CKTMGB4Q2R-04 40RPM)

A worm gear motor is an electric motor that uses a screw gear to operate: the motor turns a threaded screw, which in turn rotates a gear placed perpendicular to it. Because the gear sits on the threaded screw, if power is lost the gear cannot rotate backwards without the screw rotating. We have used this motor within the automated door.

 

NEMA17 Stepper Motor

Initially used in early iterations of the coop door module, this motor is commonly found in 3D printers and sports a large threaded rod extending from the spindle (not all models include this). Various NEMA frame sizes exist; for the project we used the 17 variant.

 

Realtime Clock Module (DS1307)

Within the system a DS1307 real-time clock (RTC) module has been used to keep accurate time for the automated door.

 

Hall Effect Sensor (KY-003)

A hall effect sensor is used to detect a magnetic field. Within the system, this sensor in combination with neodymium magnets has been used to detect the position of the automated door; there is a hall effect sensor at the fully open position and one at the fully closed position.

 

Monolithic Photodiode (OPT101)

A photodiode is used to detect and report light levels. Within the system this sensor is used so that the automated door does not open through automation whilst it is still dark.

 

ESP32 Development Board

A low cost System On a Chip (SOC) used for prototyping. Though not used in the final product, these boards were used to create prototypes testing the feasibility of tracking the rough location of chickens.

 

BLE Proximity Beacon (iBeacon compatible)

A simple Bluetooth Low Energy (BLE) beacon that broadcasts a signal. These beacons can be used to detect approximate location and were used when prototyping chicken location tracking.

 

Soft Technologies

Below is an abridged list of the main software used within our project.

 

Python3

Python3 was the main language used to write the main control hub of the system, the corresponding REST web server and the image analysis module. A selection of libraries were used; the major ones are outlined below.

 

Asyncio

One of the core libraries that dictated the architecture of the codebase for these modules was asyncio; this module allows Python code to be written in an asynchronous manner using the standard async/await syntax. Unlike languages such as Erlang or Scala, Python is not inherently concurrent and as such requires a library to mimic this behaviour. Asyncio provides a global event loop to which tasks, referred to as coroutines, can be added and executed. A task runs until it completes or makes a yielding call such as an await or sleep; when a task yields, the event loop switches execution to another task for some amount of time.
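A minimal sketch of this pattern is shown below; the coroutine names are illustrative, not the hub's actual code.

```python
# Sketch only: two cooperatively scheduled tasks on one event loop.
import asyncio

async def poll_sensors():
    for _ in range(3):
        # Awaiting hands control back to the event loop, letting
        # other coroutines run while this one waits.
        await asyncio.sleep(1)
        print("sensor reading taken")

async def flush_to_db():
    for _ in range(3):
        await asyncio.sleep(1)
        print("buffered readings written to the database")

async def main():
    # Run both coroutines concurrently on the single event loop.
    await asyncio.gather(poll_sensors(), flush_to_db())

asyncio.run(main())
```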

 

 

AIOHTTP

AIOHTTP was the library used to build the REST web server for the main hub; it is an asynchronous HTTP client/server built on top of the asyncio library. Though lightweight, AIOHTTP has a host of useful features, such as middleware support and deployment under servers like Gunicorn, a top-level master process that spins up N workers to handle incoming web requests.
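The shape of an AIOHTTP endpoint is sketched below; the route and response body are illustrative, not the hub's actual API.

```python
# Sketch only: a minimal AIOHTTP REST server in the style used by
# the hub. The /door route and its payload are hypothetical.
from aiohttp import web

async def door_state(request):
    # The real hub would query the CAN-connected door module here.
    return web.json_response({"door": "closed"})

app = web.Application()
app.add_routes([web.get("/door", door_state)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```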

 

HBMQTT

HBMQTT is an asyncio-compliant MQTT client and broker library for Python; it was used to interface with the Zigbee2MQTT water reading device.
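Subscribing to the sensor's topic with HBMQTT looks roughly like the sketch below; the broker address and topic name are illustrative, as Zigbee2MQTT publishes under a configurable base topic.

```python
# Sketch only: receive water sensor updates over MQTT with HBMQTT.
import asyncio
from hbmqtt.client import MQTTClient
from hbmqtt.mqtt.constants import QOS_1

async def watch_water_sensor():
    client = MQTTClient()
    await client.connect("mqtt://localhost")
    await client.subscribe([("zigbee2mqtt/water_sensor", QOS_1)])
    for _ in range(3):  # bounded for the sketch; the hub loops forever
        message = await client.deliver_message()
        payload = message.publish_packet.payload.data
        print("water sensor update:", payload.decode())
    await client.disconnect()

asyncio.run(watch_water_sensor())
```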

 

Torch

Torch is an open-source machine learning library that was used for image classification within the system. The pretrained VGG16 neural net was used as a basis for the classifier, with the last layers retrained to the specific requirements of the system.
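The transfer-learning setup described above can be sketched as follows; the choice of optimiser and the exact layers retrained are assumptions, not the project's precise configuration.

```python
# Sketch only: load a pretrained VGG16, freeze its convolutional
# feature extractor and retrain the final layer for two classes
# (egg / no egg).
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)

# Freeze the pretrained feature extractor.
for param in model.features.parameters():
    param.requires_grad = False

# Swap the last fully connected layer for a two-class head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

# Use the GPU's CUDA cores when available, as on the REST server.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```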

 

Angular7

The web interface uses the Angular JavaScript framework. Written in TypeScript, this component-based framework provides support for two-way data binding, observables, routing and guards. The framework has access to a wide variety of useful libraries, ranging from Angular Material for UI components to Google Charts.

 

 

 

Angular Material

Angular Material is a library of Material Design components. Used throughout our application, it provides UI elements ranging from simple buttons to the sidenav implementation.

 

Google Charts and Chart.js

Both of these libraries are native JavaScript charting libraries; however, open-source developers have provided Angular wrappers around them for easy integration with Angular applications. Within our system these libraries are used for the visualization of historical data.

 

Node

Node is a modern JavaScript run-time environment that operates outside of the browser. This allows us to leverage JavaScript as a server-side language, and as such we have used it to host the REST server of our web application.

 

Express.js

Express is a framework for Node.js and is used as the backend of our web interface. This fast framework provides support for middleware and database integration, allowing the front end to retrieve live data.

 

JSON Web Tokens

JSON Web Tokens (JWT) are used to authenticate the user. Using a signed secret, they provide a means to securely share JSON between the backend and frontend. This is used to transfer sensitive data such as the user's role and name. All endpoints on our backend server that contain sensitive information require a valid, up-to-date JWT to be provided with each request.
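The project's backend implements this with a Node JWT library; purely to illustrate the sign-and-verify flow (and to keep these sketches in one language), the same idea is shown below in Python with PyJWT. The secret and claims are hypothetical.

```python
# Sketch only: issue a token carrying role and name claims, then
# verify it on a protected endpoint. Secret and claims are made up.
import datetime
import jwt  # PyJWT

SECRET = "change-me"

def issue_token(username: str, role: str) -> str:
    claims = {
        "sub": username,
        "role": role,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError if the
    # token is stale or tampered with, so such requests are rejected.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("steve", "admin")
print(verify_token(token)["role"])  # -> "admin"
```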

 

PostgreSQL

PostgreSQL was chosen as the Database Management System (DBMS) for the project as it was readily available via the Computing department's hosted database server, 'penguin'. It should be noted that it could easily be swapped out for any reasonably SQL-compliant DBMS.

 

Zigbee2MQTT

Zigbee2MQTT is open-source software that bridges proprietary ZigBee devices, such as the Xiaomi range of smart home sensors, to a standard MQTT interface. This allows us to use these off-the-shelf sensors within our project without having to build a complex interface to receive and parse their output.

 

Blender

Blender was used to produce the spindle model which was 3D printed and used on the automated door.

 

Arduino IDE

The Arduino IDE was used to write the C/C++ code present on the automated door and air condition sensor due to its effortless integration with the Arduino hardware, as well as its simple debugging tools like the built-in serial monitor.

 

VS Code

Due to its advanced shortcuts, integrated terminal and wealth of available packages, VS Code was used as the editor of choice for developing the majority of the project. All languages used had syntax highlighting support and linters available.

 

Software Engineering

Programming Methodology

Paradigms and models

Agile

Agile software development is a model. It is not a methodology, framework or process, but rather a set of values and principles which a team follows to develop software. The term was popularised by the Manifesto for Agile Software Development.

The manifesto claimed that it values:

 

agile manifesto

 

These values mean that processes and tools are noteworthy; however, it is more significant to have capable people working effectively together. Good documentation is also useful in understanding how the software works, yet the point of software development is to produce software rather than documentation. In industry, a contract is important, though working with your client as the requirements change is better for both parties. Finally, a good plan can be important; however, it should allow time to accommodate changes to the requirements, technologies or any other unexpected changes.

Owing to the above, the Agile approach follows twelve core principles, which include continuous delivery, simplicity, short iterations and collaboration with the client.

The drawbacks of this development style usually include a lack of predictability as requirements and technologies can change at any time. By embracing the Agile approach, developers will have to accept uncertainty in their methods over following a plan.

 

Waterfall Model

The Waterfall Model is a linear, sequential model and is one of the most traditional software development models. Rather than having an iterative or flexible approach, this model only allows progress in a single direction, and developers must complete one stage before moving onto the next.

The model roughly follows six phases: requirement analysis, system design, implementation, testing, deployment and maintenance. Some variations diverge by adding or removing phases or returning to the design phase if the downstream phases were insufficient.

This model is best used when the requirements are clear, well documented and will not change in the future, which often applies to small projects. The pitfalls are that it is very difficult to change something done in a previous stage, and no working software is produced until late in the project timeline.

 

waterfall model

 

Spiral Model

The Spiral Model is a risk-driven model that aims to map and reduce the risks of the project early on. It combines iterative development with aspects of the waterfall model: prototypes are created at each stage, and requirements are refined through each increment of the spiral. Each phase is reviewed by the client at its end, and developers should make decisions based on the level of risk, aiming to lower it as much as possible.

There are four main phases to the spiral model, which can vary depending on the project, but most include planning, risk analysis, engineering and evaluation.

 

spiral model

Spiral Model, Boehm 1988. Source: https://commons.wikimedia.org/wiki/File:Spiral_model_(Boehm,_1988).png

 

This method is used when there is a budget constraint or for high-risk projects. The advantages are that shifting requirements can be accommodated, and clients are able to see a wide variety of prototypes to give feedback on.

 

Methodologies

Scrum Development

Scrum is a popular agile framework which divides large projects into short "sprints", typically two weeks long. At the start of the sprint, the team holds a planning event to agree on the tasks set out for the sprint and to set up a backlog.

There are three roles that a team member can take when using this methodology.

The product owner represents the stakeholders and clients. They manage the backlog and are focused on increasing the value of the software. They maintain a dialogue with the stakeholders and therefore focus on the business aspect of the team. They usually produce user stories to aid the development team in the design of the software.

The development team is responsible for delivering an iteration at the end of every sprint. The size of this team varies with the complexity of the project, but is usually three to nine members who focus on design, development and testing.

Finally, we have the scrum master, who oversees the team. This can range from helping the team reach a consensus in the daily scrum and removing obstacles which impede the team, to helping the team stay focused.

During the sprint, the team holds daily scrum meetings with specific guidelines: the daily scrum has to start precisely on time, even if some members are missing; the location and time cannot be changed; and the timeframe is limited to fifteen minutes. Team members discuss their progress since the last meeting and the tasks to be done today. They should also point out any obstacle that could stop the team from achieving the sprint goal.

At the end of the sprint, the entire team will review and hold a retrospective of the sprint. This entails reviewing the finished and incomplete work and demoing the completed work to the stakeholders.

This methodology is cost- and time-effective and allows large projects to be split into manageable sprints; however, it can also lead to "scope creep" as there is no definite completion date.

 

Kanban

Kanban is another agile methodology, one which uses a board-based approach to task management. The entire team shares a common board (be it physical or online) of all the tasks required within the project. The tasks are organised under different categories that show their current state; the overarching categories are commonly backlog, active and complete, though each team will adapt these as needed, for example by adding an 'in code review' category.

This approach is much looser than other forms of agile development and has a much smaller overhead. Work is pulled through in single-piece flow, unlike the fixed batches of Scrum sprints. As work is considered in single file, changes can be made at any time and continuous delivery can occur.

Developers 'pull' in work when free and 'push' tasks through the stages; this allows developers to work at their own pace on the tasks they feel comfortable completing. Though this approach seems ideal for adaptability, it does have some downsides: task starvation can occur if developers are not required to pop from the backlog in any given order, and without common progress checks the timeline of the project can begin to drift.

 

kanban board

 

Basic Kanban task board, Bossarro 2014. Source: https://commons.wikimedia.org/wiki/File:Basic-kanban-board.png

 

Extreme Programming (XP)

Extreme programming is an agile methodology which hinges on the idea of short development cycles with constant deliveries. These methods are used to aid productivity and allow room for the requirements to adapt between releases. Dependent on the stage of the project, a release can be made available to the client to help ensure that their requirements are being met and that none have been missed from the initial specification.

This methodology often relies heavily on pair programming, an agile practice in which two developers work in tandem to write and review code simultaneously. The pair takes turns acting as the driver (the coder) and the navigator (the reviewer). This improves code quality, helps keep the system in a clean state and helps motivate the developers.

Unfortunately, extreme programming does have downsides. Firstly, it is code centric, and it can be argued that the design of the software is largely ignored unless vigorous planning is done. Secondly, as the methodology relies on quick iterations and potentially rapidly changing constraints, it is not unlikely that initial releases of the software will contain bugs. For these reasons this practice can require a lot of refactoring, and therefore has the potential to be largely time-consuming.

If the extreme programming methodology is used, extra care should be taken to ensure that the system built not only matches the initial and new requirements specified by the client, but also performs in a stable and reliable manner. This can be assured via a relatively thorough testing regime.

 

extreme programming

 

Extreme Programming, DonWells 2013. Source: https://commons.wikimedia.org/wiki/File:Extreme_Programming.svg

 

Practices

Test-Driven Development (TDD)

Test-driven development is associated with agile and extreme programming. Under this practice, developers write failing tests before writing production code. They then write the code to pass these tests and repeat the process.

A benefit of following this practice is that it makes the code more modular, because small tests are written each time and code is written around them. It can also provide better documentation for your code: the tests are constantly running and changing, whereas written documentation can be neglected and go stale. Drawbacks include maintaining the test suite, and the fact that some tests can be hard to write.
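A minimal illustration of the red-green cycle, using pytest conventions; the function under test is hypothetical.

```python
# Sketch only: the test is written first and fails, then the
# simplest passing implementation is added underneath.

def test_door_opens_when_light_and_unlocked():
    assert door_should_open(light_level=800, locked=False)
    assert not door_should_open(light_level=100, locked=False)

# Simplest implementation that makes the test pass; the threshold
# is invented for the example.
def door_should_open(light_level: int, locked: bool) -> bool:
    return light_level > 500 and not locked
```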

 

Our Methodology

We have decided to approach the solution with an agile methodology, using aspects from extreme programming and Kanban. As the team is small, with two group members, approaches such as Scrum and waterfall have too much overhead. The waterfall model is catered to engineering projects where the requirements are strict and allow only slight fluctuation, whereas the requirements for this project could very well change.

However, due to the size of the project and team, we will not use the pair programming approach found in extreme programming; instead we will use code reviews. This will be more time efficient, as both members can concentrate their efforts on different parts of the project whilst reducing the chance of bugs within our code. From Kanban, we will use task boards, shared between team members, to track work.

This approach will allow the team to embrace the changing requirements of the client, alongside producing rapid iterations and prototypes which means there is a fast proof of concept for the client.

 

code review

 

Version Control

For our project, we have opted to use GitHub to keep track of our source code. We will work in separate branches, and only code that has been reviewed can be merged into the master branch. Below is a list of our working branches:

 

branches

 

Documentation

We have both agreed to document our code where appropriate, using the documentation tool relevant to each language: for example, PyDoc for Python and JSDoc for JavaScript.

 

Testing

Throughout the project, manual and regression testing was conducted to minimize the defects and errors in our code.

Black box testing was conducted on the REST endpoints of the hub, making sure that the values received are correct and that only authorized access to information is allowed.
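The sketch below shows the shape of these checks, written with the requests library; the hub address, endpoint and token are illustrative assumptions.

```python
# Sketch only: black box checks against the hub's REST interface.
# Unauthorised requests must be rejected; authorised ones must
# return well-formed data. URL and token are hypothetical.
import requests

BASE = "http://hub.local:8080"

def test_rejects_unauthorised_access():
    response = requests.get(f"{BASE}/door")
    assert response.status_code in (401, 403)

def test_returns_valid_door_state():
    headers = {"Authorization": "Bearer <valid token>"}
    response = requests.get(f"{BASE}/door", headers=headers)
    assert response.status_code == 200
    assert response.json()["door"] in ("open", "closed")
```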

Due to time limitations and limited manpower, manual testing was conducted on the web interface rather than setting up an automated testing framework such as Nightwatch.js using Selenium.

After we designed the architecture of our system, we created a testing plan of affected components. This plan is then used whenever a major change happens to any component.

Testing Plan

Changes made on Hub

 

Air Condition Sensors

 

Door

 

Water Sensor

 

Interpreted Language

 

Database

 

Networking

 

Web Interface

 

Future Work

There are many improvements that can be made to our testing strategy.

 

Unit Tests

Owing to the limited team size, and the client not explicitly specifying a testing scheme, we opted not to go down the unit testing route. With more team members or time, we would have opted for more thorough testing, including the use of a unit test framework. The reason this approach would have been so time-consuming is that the majority of our code base is network code, so mocks would have had to be created for each module.

 

Automated Testing

Future work will need to be done to automate testing of the web interface. This could involve using testing frameworks such as Nightwatch.js to achieve this.

 

Animal Welfare

As the scope of this project requires interaction with poultry, we made ourselves aware of the legislation pertaining to working with or around animals. Since the project involves no direct participation from the animals themselves, the only act directly applicable to us is the Animal Welfare Act 2006. The premise of this legislation is to guarantee that all reasonable steps are taken, whilst we are in contact with the animals, to ensure that their needs are met. In regards to chickens these needs include access to victuals such as food and water, not being subject to unnecessary distress, and not being put in danger.

 

Minutes

Date: 21/09/2018

Attendees:
What's Been Done since the Previous Meeting:

No previous meeting.

What's Being Done:

Joanna + Damon:

Further Discussion:

Discussed software engineering strategies to use in this project

Briefly discussed the Viva, and how they are going to assess us

Document ALL major design decisions

Will probably use GitHub for version control

Discussed client questions:

Discuss set date to meet every week (Fridays 1-2)

 


Date: 28/09/2018

Attendees:
What's Been Done since the Previous Meeting:

Joanna + Damon:

What's Being Done:

Joanna + Damon:

Further Discussion:

 


Date 02/10/2018

Attendees:
What's Been Done since the Previous Meeting:

Joanna + Damon:

What's Being Done:

Joanna + Damon:

Damon:

Joanna:

Further Discussion:

Client is a hobbyist but needs vary dependent on the location; people need pseudo-bespoke solutions, hence a modular system might be the best route to go.

Talked about:

 


Date 12/10/2018 ; 12:00 - 13:00

Attendees:
Discussion:

 


Date 12/10/2018 ; 13:00 - 14:00

Attendees:
What's Been Done since the Previous Meeting:

Joanna:

Damon:

What's Being Done:

Joanna + Damon:

Further Discussion:

 


Date 20/10/2018

Attendees:
Discussion:

 


Date 26/10/2018

Attendees:
What's Been Done since the Previous Meeting:

Joanna + Damon:

Joanna:

What's Being Done:

Joanna + Damon:

Joanna:

Damon:

Further Discussion:

Date 02/11/2018

Attendees:
What's Been Done since the Previous Meeting:

Damon + Joanna:

Damon:

Joanna:

What's Being Done:

Damon + Joanna:

Further Discussion:

Discussed the project end goals.

Discussed CONOPS stakeholder document: who are the people who express an interest in this system and what their interests are.

Overall constraints of the system:

Responsibilities - who maintains this thing and what parts of these things. Is it all on Dan's parents or is there continued support?

Functional Requirements:

CONOPS: Who is responsible for the different bits?

These are the features we are going to implement. Next stage is prototyping small bits on their own. Prototype opening doors and mechanisms. Regardless of software and communication. Prototype all different bits, then we know if this and that works.

Testing out and evaluating different solutions that meet functional requirements.

Functional requirement -

Door:

Then, we want to:

Communication Systems:

We define the scope of the product. Dan decides what is technical and is good for CO600. If we are doing more research, we need to ask ourselves why we are doing that research and what would it contribute. We need to have a clear idea of what we're building and prototype these ideas to build it. Then focus on integrating it into the system.

 


Date 04/11/2018 ; Emergency Meeting

Attendees:

Emergency Meeting

Discussion:

Date 09/11/2018

Attendees:
What's Been Done since the Previous Meeting:

Joanna + Damon:

What's Being Done:

Joanna + Damon:

Further Discussion:

 


Date 15/11/2018

Attendees:
Discussion:

 


Date 16/11/2018

Attendees:
What's Been Done since the Previous Meeting:

Joanna:

Damon:

What's Being Done:

Damon:

Joanna:

Further Discussion:

Date 18/11/2018

Attendees:
Discussion:

 


Date 22/11/2018

Attendees:
Discussion:

 


Date 23/11/2018

Attendees:
What's Been Done since the Previous Meeting:

Joanna + Damon:

Joanna:

Damon:

What's Being Done:

Joanna:

Damon:

Further Discussion:

 


Date 29/11/2018

Attendees:
Discussion:

Date 30/11/2018

Attendees:
What's Been Done since the Previous Meeting:

Joanna + Damon:

What's Being Done:

Joanna:

Damon:

Further Discussion:

 


Date 07/12/2018 ; Cancelled

Attendees:

N/A

Reason:

Date 12/12/2018

Attendees:
Discussion:

 


Date 13/12/2018

Attendees:
What's Been Done since the Previous Meeting:

Damon:

Joanna + Damon:

What's Being Done:

Joanna:

Damon:

Further Discussion:

 


Date 19/12/2018

Attendees:
Discussion:

 


Date 23/12/2018 ; Over Instant Messenger

Attendees:
Discussion:

Joanna:

Damon + Joanna:

Damon:

 


Date 12/01/2019

Attendees:
Discussion:

Damon:

Joanna:


Date 18/01/2019

Attendees:
What's Been Done since the Previous Meeting:

Joanna:

Damon:

What's Being Done:

Joanna + Damon:

Joanna:

Damon:

Further Discussion:

Date 25/01/2019 ; Cancelled

Reason:

Damon is ill.

 


Date 31/01/2019

Attendees:
Discussion:

 


Date 01/02/2019

Attendees:
What's Been Done since the Previous Meeting:

Damon:

Joanna:

What's Being Done:

Joanna:

Damon:

Further Discussion:

 


Date 07/02/2019

Attendees:
Discussion:

Date 08/02/2019

Attendees:
What's Been Done since the Previous Meeting:

Joanna:

Damon:

What's Being Done:

Damon:

Joanna:

Further Discussion:

Poster:

 


Date 15/02/2019 ; Cancelled

Reason:

Joanna's car needed to be MOT'd.

Discussion:

We let Dan Knox know our progress over Instant Messenger.

Joanna:

Damon:

 


Date 21/02/2019

Attendees:
Discussion:

Date 22/02/2019

Attendees:
What's Been Done since the Previous Meeting:

Joanna + Damon:

Joanna:

Damon:

What's Being Done:

Damon:

Joanna:

Further Discussion:

N/A


Date 17/11/2019

Attendees:
Discussion:

 


Date 01/03/2019

Attendees:
What's Been Done since the Previous Meeting:

Damon:

Joanna:

What's Being Done:

Joanna + Damon:

Joanna:

Further Discussion:

N/A

 


Date 08/03/2019

Attendees:
What's Been Done since the Previous Meeting:

Damon + Joanna:

What's Being Done:

Damon + Joanna:

Further Discussion:

 


Date 13/03/2019

Attendees:
Discussion:

Damon + Joanna:

 

Field Research

Client Research

We were initially approached by the father of a Kent alumnus, Steve, who wanted to automate his chicken coop door. We arranged an interview with Steve to gain more insight into his situation. After conducting this interview, we also arranged interviews with other smallholders to gain an insight into how they manage their chickens.

 

Initial Client Interview

We conducted an interview on our client's premises in Surrey to gain better insight into their situation.

Steve Transcript

Steve Location

 

Images

After moving out into the garden to see their chickens, they mentioned that they suffer from rats stealing food and that badgers have been around in the past. They are thinking of breeding chickens in the arcs and incubating the eggs.

mother_hen2

 

nest_box

 

arc

 

main_chicken_coop

 

main_coop_door

 

main_coop_door_inside

 

mother hen

 

mother_hen_door

 

 

Conclusion

From this interview, we gained a better insight into the daily life of our client. Steve explained that it is a rush in the morning to feed the animals, ranging from dogs to horses to the chickens. Steve mentioned that automating the control of the door would help him - "It's one less thing to worry about". However, when approached with the idea of automating feed, Steve seemed reluctant, stating "it's important you need to interact with [the chickens]".

 

Smallholder Interviews

To gain a further understanding of how other smallholders manage their chickens, we conducted interviews with six other smallholders, four in person and two over the phone.

 

Sue Coombe-Tennant

Sally, wife of Steve, kindly arranged an interview for us to visit her friend Sue who also owned chickens. The transcripts and location can be found below:

Coombe Tennant Transcript

Coombe Tennant Location

 

Conclusion

For Sue, one of the major issues with owning chickens is the fox. She mentions that they come at any point during the day and will kill the whole flock. She has defences in place such as an electric fence and makes sure they are shut in at night. She finds it difficult to be away for long periods of time, such as going on holiday, as there has to be someone there to look after them.

 

Sue Moorhouse

Sally also kindly arranged an interview for us to visit another of her friends, also named Sue, who owns chickens.

Moorhouse Transcript

Moorhouse Location

 

After we finished recording for our transcript, Sue mentioned she also suffers from rats, and that they steal the food off the chickens. She tries to set rat traps but is concerned that a chicken or dog might get caught in the trap instead. She has seen badgers around her farm, but so far they haven't attacked her poultry.

 

Images

door to moorhouse chicken coop

zoomed in door to coop

chickens taking shelter

pano shot of the chicken area

 

Conclusion

Sue Moorhouse owns both ducks and chickens, and both would fall prey to foxes if managed incorrectly. They would prefer to have their poultry roam around the fields, but incidents have happened in the past where a fox has killed their flock. They have opted to use an electric fence to deter foxes, which does the job as long as they remember to charge the battery powering it. Another interesting issue they have is with crows, one of their biggest problems, as they will "come take the eggs in the house when the barn door's open". Sue now uses a smaller cut-out door instead of the full-sized door, and finds it a chore to shut it up at night when the weather is miserable.

 

Suzie Sparrowhawk

Dan Andrews, son of Steve, helped us arrange an interview with his cousin's parents, who keep two chickens.

Sparrowhawk Transcript

Sparrowhawk Location

Images

outer coop door

red mites

 

Conclusion

As per the above, Suzie has issues with foxes as well as red mites. She mentioned that a fox was able to nose the lock open and destroy her flock in the past. Because of these past incidents, she has now built a new enclosure which is completely contained. Suzie mentioned that she once woke up in the middle of the night and thought she had not shut the chickens up. She got dressed and went into the garden, just to find them already shut up. She stated that "it would have been nice to have known that the chickens were in and safe without having to get out of bed."

 

Will Threlfall

Will Threlfall, a friend from university who recently completed his Year in Computing, owned three Rhode Island Red chickens. He and his parents didn't mind us coming around to have a chat with them about their chickens; however, they preferred not to be recorded. As such, we only have images and notes for this visit.

Threlfall Location

Notes

 

Images

image of outer run

inner coop

water drinkers

small chicken perch

chicken carrier

 

Liam Richards

Liam is a friend met through the Andrews family who owns a flock of ducks. This interview was done over the phone, as Liam was visiting his parents on the Isle of Wight.

Conclusion

From our phone call, Liam mentioned he has 10 ducks and used to have a few chickens, though they died of old age recently. Unlike his previous chickens, he has to lure the ducks back to the coop with feed and shut them up. Though they all follow each other, the process is quite tedious, especially in the winter. He has found that the water sometimes freezes during the winter, so he has to constantly refill the drinker. He also utilizes an electric fence to deter foxes, and mains power can be found inside his coop.

 

Christine Sparrowhawk

Christine, the sister of the client's wife, Sally Andrews, owns chickens and did not mind having a discussion with us over the phone about her flock. She lives in Cheddar, Somerset.

Conclusion

Christine owns around 8 chickens which roam freely in a secured run that keeps the foxes out. Her main coop uses a COTS product to automate the door - ChickenGuard. She mentioned that it has been reliable, although it once failed due to the batteries running out. Previously, before the outer run was built, she suffered daytime attacks by foxes and badgers which wiped out the majority of her flock. Rats were also an issue, but since getting the new run they have been kept out.

Interview Analysis

This section summarises the main points of the research carried out for this project.

Overview

We interviewed 7 smallholders from various parts of South East England (Kent, Surrey and Sussex), with one exception being a holder from Somerset. We will refer to them collectively as "the target group" for this analysis and use their data to identify common functionality provided in their fowl housing. We categorised the group based on location and land owned: above 5 acres of land can be considered a 'farm', anything between 0.5 and 5 acres can be considered rural, and anything below 0.5 acres is regarded as suburban. The target group shared a consistent set of problems, varying from predators to going on holiday, with each member handling them in a different manner.

There is a clear problem that much of the target group cannot leave their poultry alone for extended periods, effectively describing them as 24/7 jobs. All of the group had previously lost birds to predator attacks, while also dealing with vermin such as rats; interestingly, one member had severe issues with crows that would break and eat any laid eggs. All members had to feed, water and enclose their poultry every day for their safety and wellbeing. As such, leaving this responsibility behind for a period of time required a solution such as chicken boarding, a neighbour looking after them or even taking the birds on holiday with them to ensure they were well kept. Forgetting or losing interest in the fowl was not an option, lest they suffer mistreatment, injury or worse - death. Every member, when asked whether they would like some form of automation, gave positive feedback. One member already had an automated solution running on AA batteries (that would enclose the fowl) but expressed concern that no feedback was available, given their system had failed previously (batteries died, and it once jammed). Another member had considered buying an automated solution but confessed that they were not comfortable trusting a machine that offered no feedback.

Our client expressed considerable concern over the various pens that they owned for their fowl, stating that forgetting to "shut them in was a death sentence". While an automated solution would be pleasant, with many automated systems working independently across multiple pens, things are more likely to go wrong and require maintenance.

Knowing this, it was clear that a solution could be derived that would inform the client (and others) of the state of the fowl housing. We considered branching out the system to accommodate other forms of fowl, such as ducks and geese. However, from the information gathered, the majority of the target group had chickens, with one member saying that the ducks returned to the stable when the chickens returned at dusk. Owing to this, and the behavioural differences between chickens and other types of fowl, we decided for the purpose of this project (and in the interest of our client) to focus on chickens specifically, including their behaviour and routines.

 

Data Analysis

Fowl

Graph showing number of fowl owned per small holder

Only one member of the target group had exclusively ducks with no chickens. Another member had 3 ducks alongside ~20 hens, but otherwise all members had exclusively chickens. The inclusion of ducks did not add much more complexity to the living arrangements for the fowl; instead they appeared to cohabit pleasantly.

 

Fowl Pen/Run

Pie chart showing types of defence used

The pen for housing fowl varied with location. 37.5% of the target group had completely sealed runs, often meshed across the top, bottom and sides of the run to encase it entirely - a surefire way to stop any predators getting in while limiting the fowl from escaping. On farms and in rural areas, 37.5% utilised electric fences to limit the fowls' area as well as provide security, with 25% of the group using only basic fencing to keep their fowl inside a pen (which often offered very little security from predators). Our client, who utilised an electric fence, did mention an interest in receiving feedback on the condition of the fence - whether it had failed or grounded itself anywhere along the line.

 

Housing

Bar graph showing types of run used

There was a larger divide here than expected. 28% of the group, who resided on farms and rural areas, had converted sheds or stables for their fowl, while all those in suburban environments (42%) used commercial housing. Another 28% owned bespoke crafted housing, such as converted wagons which could now house chickens. Most of the group we interviewed utilised a raising hatch on runners, as shown below.

 

Sliding chicken coop door seen at client's:

Sliding door seen at clients place

 

Chicken coop door at Sue Moorhouse:

Sliding door seen at the Moorhouse's place

 

Chicken coop door at Sue Coombe-Tennant's:

Sliding door seen at the Coombe-Tennant's place

 

They all used a pulley system to raise and lower the hatch to enclose or free the fowl. However, the chicken housing in the suburban areas was more varied: some had doors which opened on a hinge, whilst others had doors that slid left to right.

Chicken coop door at Suzie Sparrowhawk’s. The main door lifts up and the hutch itself slides left and right.

Chicken coop door seen at Suzie Sparrowhawk's

 

Predators and Pests

Bar graph showing pests

Every member had concerns, issues and nightmare scenarios involving badgers, foxes and rats - with one member suffering frequent crow attacks. The attack frequency had no relation to whether ducks or chickens were housed. 28% of the target group suffered from red mites, parasites which cause a variety of issues ranging from reduced egg production to death. Dealing with mites normally involves purifying the housing if made of wood, or utilising specialist sprays and earth to cull mite populations. Detecting these would prove rather difficult given their size and behaviour, but it remains a valid concern. Those who suffered from red mites tended to be suburban landowners, while those who owned an acre or more reported no mite infestations.

 

Power

Tabularized data on power

 

57% of the target group had mains power, with one person without mains power planning on installing it. Existing products that focus on individual components for automation (see the ATX automated feeding and enclosing system) focus on delivery without consistent mains power, opting for a portable but limiting battery solution. Some members (especially our client) were reluctant to use ATX's products as they would not notify them if something had failed, opting instead for an old-fashioned hands-on approach.

An automated 'hop door' and 'feeder' system, together valued at around €300.

ATX

ATX Automated Door and Feeder. Source: https://gb.axt-electronic.org/

 

Wi-Fi

72% of our target group had a wireless network (Wi-Fi) within reach of their pen, although 40% of those with Wi-Fi suffered from extremely poor reception that was potentially not viable for use in a solution; another 40% had adequate Wi-Fi and 20% had perfect Wi-Fi. All of the target group had access to a mobile network (3G/4G) at their poultry housing. None had a wired Ethernet option. This illustrates that a solution cannot rely on Wi-Fi to accommodate all smallholders: should the Wi-Fi go down, the system would be isolated, much like a pre-purchased solution. Pertaining to our main client Steve, we were lucky that Wi-Fi was available, with the potential for Ethernet in the future.

 

Summary

Whether they keep ducks or chickens, smallholders face a problem in protecting their birds without being tied down. Users cannot travel away without fear of an isolated automated system failing, instead boarding their chickens or relying on a sitter to maintain their wellbeing. Predators will be ready regardless of the time or season and are persistent in their attacks. Our target group utilises various solutions for housing fowl, whether bespoke or guarded by an electric fence. All of them utilise some form of enclosed structure to protect and limit the roaming space of their fowl, constraining them to a safe boundary. At night they manually shut them in, aside from one member who utilised an existing automated solution, ChickenGuard, which they stated had failed previously.

There are a few existing solutions on the market, all of which operate on a timer system, a light sensor system or both. As mentioned, the target group had a member who owned an existing solution but was not happy with the outcome, having had it fail on them previously. Nearly all existing solutions focus on automated doors, with some offering automated feeders.

Our project aims to cover a broader spectrum that falls within the needs of our client and the other smallholders interviewed; this includes feeding, hydrating and securing their chickens. One company (ATX) offered an automated feeder, but this provided very little feedback, opting for an LCD display that had to be read manually to verify its current state. No solution offered remote feedback that could be read away from home or the pen - knowing this, we need to design a solution that offers what these systems provide while battling the issues they introduce; notably a lack of feedback.

With this information in hand we are able to suggest an implementation to our initial client, Steve. The system our client requires (and that others may desire) will need to be modular to adapt to various changes in environment. As Steve had mains power within the coop, we did not need to build a system with low power consumption, although if we were to push this to a wider market, we would.

In conclusion, the system must be modular, as Steve is unsure how the coop structure is going to change in the upcoming years. Therefore, it must support various arrangements and allow Steve to plug and play different components when needed. If any of these devices are battery powered, the battery life should last as long as possible without inconveniencing the end user. Within this system, devices that can be powered by fixed mains should be. This will allow for more processing-heavy modules and toolsets to be used without fear of diminishing the battery life of the system.

 

Requirements

After conducting interview analysis, we drafted up some ideas and a potential feature set of modules for the system. We reviewed this list with Steve, our client.

 

Proposed Functionality

Automated Door

Every smallholder interviewed faces issues with safeguarding their chicken coop against the fox. One of the ways they do this is to shut the coop at night. However, this ties the owner down to a certain time every evening. We will research and implement a solution which automates this and provides feedback, displaying the state of the door.

 

Food Monitor

Many smallholders face issues with chicken feed attracting rats and mice into the coop. A solution we thought up involved a monitor which alerts the user when food needs to be added, or an automated feeder which dispenses the food itself.

 

Water Monitor

In the same vein as the food monitor, this will monitor when the water level in the drinker is low. Once the level falls below a certain threshold, the user will be notified.

 

Temperature Monitor

This will monitor the temperature inside the coop. From previous research, chickens are known to get "heat stress", which decreases the production of eggs. By providing a monitor, a user could understand why a chicken has stopped laying eggs.

 

Humidity Monitor

Previous research shows that chickens can experience respiratory issues if humidity levels are high, as this increases the chance of mould growing in their bedding. The mould can then release spores which affect their immune system. By providing a monitor, a user can maintain optimal humidity levels, ensuring the health of the flock.

 

Dust Monitor

Dust levels inside the coop must be kept low, as both chickens and humans can develop respiratory issues when breathing in more than 10mg/m³ over an 8-hour period.

 

Electric Fence Monitor

Smallholders utilise electric fences to deter predators, but often forget to charge or swap out the battery powering them. By providing an electric fence monitor, the voltage can be measured to determine if the battery needs replacing.

 

Egg Presence Detection

Inspired by Sue Moorhouse's story of the egg thief, a device could be created that alerts the user when an egg has been laid, minimising the chance of a user going to collect eggs when none have been laid.

 

Interface to display monitoring data

An interface can be created to display the data from the above sensors.

 

Live Camera Feed

A live camera feed could be implemented so the user can see what their chickens are up to throughout the day. It also gives them peace of mind that a fox has not infiltrated the coop.

 

Client Feedback

We spoke about these ideas with the client. Steve mentioned he's on board with most of them, stating that they will help ease the burden of keeping chickens and ensure his flock is healthy. He feels that the food monitor is not needed, as he believes you owe some level of care and interaction to your pets. As such, he would like to continue manually feeding them twice a day so he knows the chickens are getting enough food. He mentioned there is already a live camera feed of the chicken coop, set up by his son, so that functionality is not needed either.

With this feedback in mind, we drafted out a requirements document.

 

Requirements

Please see the requirements document

 

Amendments

We were advised not to work with electric fence monitors, as working with high voltages can pose health and safety risks, especially as we're not experts in electronics. As for the rest of the requirements, Steve signed off on the list, giving us the green light to proceed.

 

System Design Concepts

Design

After reviewing and agreeing on our requirements with the client, we sketched out some design concepts which could fit the system.

 

Design Idea 1

A system was envisioned where each sensor operates of its own accord. These sensors record information at a fixed interval and add the data to a database.

designidea1

 

The visualisation will be able to control the automated door directly by means of a web request.

 

Design Idea 2

Another idea we developed was for a main hub to control the actions of each individual sensor by querying for information at certain intervals. The sensors simply listen for these requests. Once the hub retrieves the information, it inputs the data into the database.

 

designidea2

 

The visualisation will query data from this database and query the hub for door control. The hub will have endpoints to allow this to happen.

 

Visualisation Choice

As the client wanted to view the system on their phone, tablet and desktop, we opted for a web-based solution for the visualisation aspect. Creating native iPhone and Android apps would allow a better-integrated user interface matching each platform's design principles, as well as integration with the phone's hardware (such as GPS). However, given the time constraints of learning Android development and Swift for iOS, we concluded that a web-based solution offers compatibility across all devices whilst minimising research and development time.

 

Selected Design

We have opted for Design Idea 2, where a central control hub controls the actions of the modular network of sensors and devices. This means the individual sensors do not need internet access, reducing their power requirements. The hub solicits messages and adds the values directly into the database. This also creates a layer of abstraction between the visualisation and the sensors: the visualisation only needs to access the database and query the hub for information about the sensors.

 

Wireless vs Wired

With the above in mind, we sketched up the placement of all sensors in the client's chicken coop. The client had moved their chicken coop at this point, so we asked for new measurements; however, the overall aim stayed the same.

 

Sketch of client chicken run

 

From the above, we realised the water and electric fence monitors needed to be wireless, as trailing cables outside are not desirable. Inside the coop we could use either wireless or wired; we opted for wired as these protocols are cheaper and more reliable. We were unsure whether to host the webserver in the cloud; however, the client clarified they would like it hosted onsite.

 

Finished high level design

 

The above is a diagram of our finished conceptual design. With the system divided into modular parts, we then started working on each of these individual components.

 

Modules

Coop Door

Premise

A core need for chickens is to roost and seek shelter, not only for protection against the elements but also against natural predators. As such, a flock of chickens requires an established shelter (known as a coop) to house them. Unfortunately, this means there is a point of entry into the chicken coop that needs to be actively managed. Predators such as foxes make quick work of a weakness like this and will happily slaughter an entire flock; a behaviour known as 'surplus killing' or 'henhouse syndrome'.

Knowing this, it is critical that a keeper of chickens affords them protection, lest their lives be at risk. Through our research we identified various means of security utilised in coops, from sliding doors to hatches and even existing automated solutions, albeit rather crude implementations.

This requirement forces the owner of the flock to be available at certain points of the day to ensure the flock is kept safe. Holidays, family events or even a working lifestyle become harder to accommodate, as the owner is always at the whim of time. Owing to this major constraint, our client was keen on this particular module, and we aimed to deliver.

The client had a simple request - automate the door. However, the task is not as simple as closing and opening when the sun goes down. Chickens exhibit varied individual behaviour and so vary their 'bed-time'; some may simply go to sleep later than others. The module would need to accommodate this and much more.

Researching and speaking to clients, we discovered that chickens naturally roost as the temperature drops, light decreases and time progresses into evening. As daylight hours extend and the sun provides lighter evenings, chickens often stay out later - we could not assume the same bedtime was always the case. Knowing this, the module had to adapt to the various seasons of the year. It could not fail, and if an error occurred or an event triggered, the module had to be able to alert the client.

We planned an initial design that would use both light and time to calculate the right moment to secure the coop; this would ensure a minor mishap (such as fireworks or an eclipse) would not trick the module into acting unexpectedly.

Final Iteration Overview

The door was a physical frame that could operate remotely and locally with sensors to indicate whether it was open or closed. It was built for display purposes and utilised various technologies to reach its final iteration.

image of completed door

Finished Door Prototype

Overall the module consisted of various parts: an Arduino Uno, a DC motor with worm drive gearbox driven through an L298N H-bridge, Hall effect sensors for door position, a photodiode light sensor, a DS1307 real-time clock, physical control buttons and a CAN bus interface.

The electronic schematic was designed as such:

schematic of door electronics

Authors' note: we are not electronic engineering students and received help with the design and implementation of this circuitry. It may be lacking or not up to the standards of conventional commercial-grade design, and as such we ask for discretion when reviewing its technical aspects.

 

Iteration 1 (Research)

As seen in the premise, we knew before developing this module the constraints it would need to satisfy; furthermore, we knew that the client would want remote control of, and feedback from, the system. The following details our research into finding a suitable microcontroller and whether we could incorporate an existing solution into our system.

Existing Solutions
Automatic Door Opener

The Automatic Door Opener is made by Backyard Chicken Coops, based in Australia. The company mainly sells a wide range of chicken coops which can withstand the harsh climate of Australia, alongside chicken coop accessories such as our product of interest - the Automatic Door Opener - and a Solar Electric Fence Energiser.

automatic door opener

Price: $199 AUD (~£110)
Website: link
Power: 4x Duracell batteries (type not mentioned)
Conditions to close: Timer
Weatherproof: "Waterproof to withstand our harsh climate conditions"
Notes: Opens and closes the coop at set times programmed by the owner. The website does not mention whether it will automatically close at dusk when not programmed. The interface looks like an alarm clock. The door can be operated in multiple ways using an optional pulley system, shown below.

The system is able to open and close the coop door at set times programmed by the owner; however, the website does not mention whether this system will open at dawn or close at dusk, so we can assume it does not have a light sensor.

Chicken Guard

ChickenGuard have multiple products with variable prices available to purchase.

chicken_guard

Looking at their 'standard' release:

Price: £114.99
Website: link
Power: 4x AA or mains powered via USB
Conditions to close: Timer
Weatherproof: "Weather proof casing"
Notes: Expensive; provides a battery warning; visible from 100m

While secure, the overall implementation lacks any real remote feedback. One of our biggest desires is to introduce remote monitoring that a client can take full advantage of.

Hardware Available

Various boards, SoCs and microcontrollers exist that would be suitable; however, given the nature of the project we desired a prototyping board - one that could support our requirements. The board needed to provide enough inputs and outputs to support our light sensing, timekeeping and physical control over the module. The following lists our research into the various boards available.

ATMEL-Boards

The Arduino family of boards is arguably the most popular among newcomers to computing and electronics. The boards have a fully functioning IDE which integrates seamlessly with the hardware. The Uno is not alone in the family; options such as the Nano, Micro or even Mega (supporting more elaborate features) exist, but fundamentally the context remains the same.

 

Arduino Uno

Image of Arduino Uno

The forerunner of the family is the Arduino Uno, a simple but capable board built around an ATmega328P microcontroller, with a voltage regulator and ADC support broken out to external pinouts. If anything goes majorly wrong with the microprocessor, it is simple to pop it out of its socket and replace it with a new one.

Arduino Micro

Image of Arduino Micro

The Arduino Micro is designed to fit on a breadboard and runs on the same microprocessor, the ATmega32u4, as another Arduino model, the Leonardo. The main difference is size: the Micro is one of the smallest boards in the Arduino family. As with the Leonardo, this board has built-in USB communication.

Arduino Nano

Image of Arduino Nano

The Nano is the smaller variant of the Arduino Uno - they both use the ATmega328 - and it offers more usable program memory than the Micro due to the smaller bootloader on its microprocessor.

ARM-Boards

The option to use more sophisticated ARM boards existed, specifically common off-the-shelf boards such as the Raspberry Pi, ODroid or ASUS Tinker Board. Price and evaluation aside, many of these boards provide a fully fledged operating system (Linux) and the perks that come with it. This complexity was beyond what we required; furthermore, we had no need for the advanced specifications that came with these boards.


However, these boards would come into their own later, especially in the realm of the hub. While this option was readily available to us, a modular system of remotely powered devices was not something these boards lent themselves to.

Conclusion - Arduino Uno

Without a doubt, the easiest option for prototyping a proof of concept was the Arduino Uno: a board used in many hobbyist projects, with easy integration with a wide range of sensors. Not only does it boast a capable but easy-to-use platform, it also provides a well-guided IDE for us to develop on. The board was readily available to us, and for future work we could remove the microcontroller itself and attach it directly to a dedicated electronics board.

 

Iteration 2

This iteration focused on the interface (mostly buttons), the motor to drive the door and how to determine whether the door is moving, open or closed.

Interface Design

The client requested that we provide a form of physical input as well as remote control for the module, so that if some part of the system failed, it could still be operated manually.

Motor Research

Deciding upon a motor was difficult. Initially neither of us understood the physical components of a motor, nor the various styles of motor commercially available. As such, we researched the motors used in systems in general.

Actuators & Drivers

Most smallholders have vertical doors on rails that slide back and forth, with nearly all existing solutions relying on these door types to provide their capabilities. We focused on the same door designs and looked at potential solutions for securing the chicken coop.

Nearly all systems that pull and push rely on gravity and a form of actuator that reels in the component in question. Some systems utilise magnets, pneumatics or hydraulics, but most commonly motors. In the case of our system, it was unlikely we would use hydraulics or pneumatics, as the expense of implementing and maintaining these was beyond the scope of our client's needs. Similarly, magnets (especially electromagnets) would require large power sources to lift and sustain the weight of the door. A motor (also commonly used in existing solutions) was our best choice for both cost and implementation.

Motors

In general, motors work on the principle of magnetism, driven by either AC or DC power (we were working with batteries and mains power - through a 13A transformer - so we focused on DC). They utilise a stator, rotor and commutator to produce rotation. By wrapping a coil of wire around the spindle (rotor) and passing current through the motor's terminals, the coil becomes an electromagnet whose field interacts with the permanent magnets of the stator, producing rotation; the commutator keeps the rotation going in one direction by reversing the current every half turn. Knowing this, we could utilise a motor as the mechanism behind opening and closing the door. Various types of motor exist, and we needed to decide which best suited our requirement of reeling the door up and down. The task demanded precision and control, as a mistake could cause the door to fail. Several motors existed that we could potentially utilise:

Conventional DC Motors

There are 3 types of DC motor that we wanted to investigate, each having various advantages and disadvantages:

Brushed Motor

A very common motor utilised in cheap circuitry - it uses permanent magnets to provide near-linear speed and torque owing to the strong magnetic field, and is recommended for robotics and servos. They're efficient and cheap, although they suffer from problems involving the brushes inside the motor (which are designed to run against the commutator). These brushes can 'spark' as they wear against the commutator, eventually causing more internal resistance (requiring more power over time) and excessive heat. These motors are notorious for being effective but lacking a long lifespan.

brushed motor diagram

Source: https://www.wonkeedonkeetools.co.uk/media/wysiwyg/CID-Cordless-Impact-Drivers-Ruth/CID25/CID-25-2.jpg

 

Brushless Motor

Brushless motors were invented to overcome common issues with brushed motors. In these motors the brushes are replaced with a drive circuit which allows for precise synchronisation, in turn allowing for better speed and torque control. They are conventionally more expensive but considered a significant improvement over brushed motors. Benefits include synchronisation to digital clock signals from pole sensors (Hall effect or optical sensors). While more complex and expensive, they provide higher efficiency, reliability and well-defined speed control.

brushless motor diagram

Source: https://www.renesas.com/eu/en/img/misc/engineer-school/fig3-a-bldc-monitor-en.gif

 

Servo Motor

Capable of delivering high torque directly through a built-in gearbox which can be calibrated for different speeds - owing to the gearbox, the rotary shaft cannot spin freely as it does in brushed/brushless motors. These motors often contain built-in provisions for error handling and precision, including the gearbox and a positional feedback device. They're unable to continually rotate at high speed like conventional DC motors unless modified, but can rotate 180 degrees in either direction, allowing for accurate angular positioning.

 

Hub Motors

Conventionally like a normal DC motor, except the operation is inverted: whereas a DC motor normally has a rotating spindle in its rotor with a static stator on the outside, a hub motor has a static core in the centre with the outer shell being the rotating element. Essentially, the outer component rotates rather than the internal spindle. These inverted motors are commonly used in electric vehicles and have many benefits for such designs. For the purposes of our project they would not be any more useful than a traditional DC motor, and they have known issues with torque stemming from their design for wheels on cars or bikes. The torque issue is resolved by increasing the size of the stator and rotor, but that is overkill for a project which does not intend to drive wheels and simply aims to lift a door about 30cm.

Induction Motors

These motors are driven by AC as opposed to DC. They're technically more sophisticated than a conventional DC motor, as they encompass a ring of electromagnets around the outside which comprises the stator. Powering the stator produces a rotating magnetic field which, by Faraday's and Lenz's laws, induces currents in the rotor that drag it around, endlessly trying to reduce the difference between the stator field and the rotor. While incredibly simple, with one moving part (and very low cost), they are not useful to us as we intend to use DC instead of AC.

Linear Motors

These motors force motion into a linear context. The stator, which usually wraps around the outside of a conventional motor, is instead laid out as a straight track beneath the rotor, pulling the motor forward along its length. Since a linear motor relies on induction, which requires AC power, we could not consider this for our project, although they are incredibly powerful and fascinating motors.

Stepper Motors

A staple in 3D printing, stepper motors offer absolute precision owing to their multiple coil windings arranged to present a near-perfect rotational field. Pairs of coils alternate their magnetic polarity to step the rotor around the magnetic field, and because the coils hold the rotor between steps, a lock-and-hold effect is present. These motors are commonly used in systems that require precise positioning.

Conclusion - Stepper Motors

We felt that this style of motor would best suit our intended design as it allowed for exact steps to be recorded. Furthermore, we could utilise this to vary the style of door produced, whether standing or sliding.

However, while early on we decided this choice would suit the requirements of the module, we later discovered several serious flaws that undermined the design and posed a serious risk to the structure of the system. The motor choice was retrofitted in a later iteration; while this is much later than we would have anticipated, it was purely down to ignorance - we did not realise the flaw even existed until then.

We decided to utilise the A4988 stepper motor driver, accompanied by a NEMA 17 as the motor itself. Both are powerful, relatively cheap and commonly supported components that would integrate well with our module design.

Sensor Research

We had to guarantee whether the door was closed, open or moving - and, if moving, whether it had become stuck. The design choice behind this was to provide the end user with precise, exact information rather than assumptions; knowing the position would be critical for feedback.

Micro Switches

Initially, we came up with a simple mechanism for detecting the module's 'closed' state. When the weight of the door fell upon these switches, they would close and send a high signal to whichever device needed it.

micro switch solution

This design, while simple and cheap, poses several problems, which led us to seek alternatives.

After speaking with Dan and Keith, we were advised to look into reed switches and Hall effect sensors.

Reed Switches

A reed switch is a magnetic switch that simply opens or closes a circuit based on the magnetic field present nearby. In layman's terms, bringing a magnet near the sensor triggers its behaviour.

reed switch diagram

Source: http://www.chicagosensor.com/images/HowItWorksReed.jpg

 

Unfortunately, reed switches, while cheap and simple, are easily damaged, being made of glass. Simple mishandling can break them entirely - especially under vibration, and with a motor involved this could cause severe issues. Given their mechanical nature, they can also naturally 'seize' and lock into position, severely limiting their lifespan.

Hall Effect Sensors

A similar component to a reed switch, operating on magnetism and providing a clear open/close circuit. Their range is limited to no more than 10mm and they require a fairly strong magnetic field to switch. These sensors have positive and negative connections alongside a third connection for the signal. Compared to reed switches, more industrial weatherproof variants are readily available that can withstand severe conditions.

EEPROM

Arguably, one could utilise the EEPROM to store the current position of the motor in increments. EEPROM provides a store of memory that persists regardless of power state: whether the device has since restarted or operated for days, the values stored in EEPROM remain. However, utilising EEPROM is costly - not only do its cells wear out after a limited number of write cycles, but frequent writes also carry a power cost that can be severe in the long run, especially in a battery-powered system.

Conclusion - Hall Effect/Reed Switches

For the proof of concept we decided to utilise Hall effect sensors, owing to their capability to withstand harsh climates.

 

Iteration 3

Having agreed on the motor and sensors required, we pursued our interest in modular and standalone design. The module needed to operate alone in the event of a network failure; it was critical the door functioned as intended regardless of the state of the network. The module had to provide its own set of sensors to work within the boundary of a standalone system.

Self Sufficiency

Light Sensor

Given our lack of knowledge in the field, we spent some time understanding and researching light-related sensors. Cheap, commercially available options abound, and the two that caught our attention were photodiodes and light-dependent resistors. In the intended system either would have sufficed. Given the materials available to us, we utilised a photodiode chip.

Real-time clock (RTC)

Unfortunately, the Uno does not provide a native clock to keep track of time, so we added an RTC component to handle this for us. Various RTC modules exist, but the simplest is the DS1307 which, while limited, provides the core functionality necessary for the module to operate. Other clocks could have provided slightly better alternatives (see the future work section). As the module had to know the current time in order to operate, it was imperative we had a mechanism to keep track of it.

This RTC in particular was readily available in the Shed for a project like this and saved us the time and resources on gathering one ourselves.

Power

Simple but effective - our main client had mains power at the ready. We took full advantage of this as utilising batteries and other features would have drawn our attention elsewhere.

We acknowledge that many smallholders do not have this luxury, but under our circumstances, being a small team, we wanted to produce a working door first. Afterwards, time allowing, we would like to research and develop a low-powered solution which accommodates the needs of all smallholders. We did use a battery pack for the module's motor in later designs, albeit purely to free ourselves from requiring a 12v power supply to test and operate the door.

Door Design

The initial design was modelled on the existing solutions we had seen at clients': a door that could be raised or lowered upon command or event. The door itself had to be sufficiently sturdy and resistant to warping, a common failing of wooden doors, which change shape with the weather.

We used plywood to build the frame and aluminium for the door itself, replicating the existing structures (and avoiding the aforementioned warping).

initial door design

When constructing the frame of the door, we joked that the device looked like a guillotine, so we engraved the word onto it. In hindsight, we felt this was not professional and covered it up for future demonstrations.

 

CAN Bus integration

When initially prototyping the door, we opted to use Serial as a way of communicating between devices.

 

Whilst this worked primitively, Dan Knox suggested we look into CAN bus and other communication protocols. He suggested CAN bus as it is more robust and allows solicited messages. For more information, please see the networking section.

 

Code design

Using C and C++ as individuals from a Java-taught environment posed a slight challenge, although we quickly rose above it. The bulk of the source can be found here. We designed the system with the intention of abstracting low-level logic away from the decision making, i.e. when to operate the door, how to handle the network and how to act as a host.

We designed several handlers to operate as managers for the module:

MotorHandler

The premise of the motor handler class was to maintain the relationship between the Hall effect sensors and the motor. The handler would provide as much feedback as plausible while refusing requests that might damage the door or impact its performance (such as requesting a closure when it is already reporting closed). The main operating loop of the microcontroller could poll the status of the door and, if necessary, open it, close it or retrieve its state.

ControlHandler

The control handler needed to support the RTC and photodiode, providing abstracted functionality such as get_light_level() without much complaint. It also needed to allow the RTC's time to be updated, given that it may drift by a second or so a week. Over a period of months the RTC would become more and more out of sync, so a mandatory update of its time reference was necessary to avoid this.

CANBusHandler

A library was already available for CAN bus integration on Arduino; however, to simplify matters we introduced a wrapper to sit between our network design and the hardware interface.

The handler would provide addressing schemes and message objects to easily wrap the contents in.
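
As an illustration of what we mean, the sketch below shows the shape such a wrapper might take; the structure and names here are simplified stand-ins rather than our exact implementation.

/*
 * Illustrative sketch only - a simplified stand-in for our wrapper,
 * not the exact implementation. Each module owns a fixed address and
 * wraps payloads in a message object before they reach the CAN library.
 */
#include <stdint.h>

struct CanMessage {
  uint8_t destination;   // address of the receiving module
  uint8_t source;        // address of the sending module
  uint8_t length;        // payload bytes in use (CAN frames carry max 8)
  uint8_t payload[8];    // raw data, e.g. a door state byte
};

class CanBusHandler {
public:
  explicit CanBusHandler(uint8_t own_address) : address(own_address) {}

  // Wrap a payload in a message object addressed to another module.
  CanMessage wrap(uint8_t destination, const uint8_t *data, uint8_t len) {
    CanMessage msg;
    msg.destination = destination;
    msg.source = address;
    msg.length = len > 8 ? 8 : len;
    for (uint8_t i = 0; i < msg.length; i++) msg.payload[i] = data[i];
    return msg;
  }

  // Hand a wrapped message to the underlying CAN library (not shown).
  void send(const CanMessage &msg);

private:
  uint8_t address;
};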

 

Iteration 4

Motor change

An issue arose during development involving the motor. We consistently found stepper motors failing on us - three in total. We could not decipher why; arguably it could have been excessive current draw, a short or possibly worse. The motor would operate as expected for hours on end until it suddenly lost its torque.

Another issue we encountered with the stepper motor was the need to be constantly powered to maintain its position, meaning that if the door was in the open state, the stepper motor would heat up considerably.

Another minor issue arose when we discovered that the motor would slowly slip over time, lowering the door away from the Hall effect sensors and producing a false negative on its location.

While a stepper motor offers consistent accuracy compared to conventional motors, it lacks any sufficient locking mechanism; if a power failure or fault occurred it could drop the coop door like a guillotine, harming the chickens or sealing them out. With this in mind we had to find a new motor or electronic alternative that could rectify this problem.

Through research we discovered the worm drive motor: a motor driven through a unique gearbox mechanism that provides a self-locking design. If the power fails or a wire disconnects, the coop door does not change position, instead locking itself in place. However, with change comes disadvantage: we found that the RPM was slower, heat was produced faster and the cost was higher than anticipated. Regardless, it was a successful changeover.

We incorporated the new DC motor with its worm drive gearbox into the module design and found immediate success. The door design itself had to change: the original motor took a large threaded rod spanning the top frame, whereas the worm drive simply attached to the top frame with M4 screws.

Along with the new motor, the original A4988 stepper driver had to be changed, as it only supports stepper motors and could not drive a conventional DC motor. Fortunately, a well-supported standard known as an H-bridge driver was available to us, with various development boards readily available. The L298N development board supports dual motors and provides a voltage regulator, speed control and polarity control.
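
As a sketch of how such a driver is typically commanded from an Arduino (the pin numbers here are illustrative, not our exact wiring):

// Minimal sketch of driving a DC motor through an L298N H-bridge.
// Pin assignments are illustrative, not our exact wiring.
const int IN1 = 7;   // direction input 1
const int IN2 = 8;   // direction input 2
const int ENA = 9;   // PWM enable pin (speed control)

void setup() {
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(ENA, OUTPUT);
}

// Raise the door: set the polarity one way and apply a PWM duty cycle.
void raise(uint8_t speed) {
  digitalWrite(IN1, HIGH);
  digitalWrite(IN2, LOW);
  analogWrite(ENA, speed);  // 0-255
}

// Lower the door: reverse the polarity.
void lower(uint8_t speed) {
  digitalWrite(IN1, LOW);
  digitalWrite(IN2, HIGH);
  analogWrite(ENA, speed);
}

// Stop the motor; the worm drive gearbox holds the door in place.
void halt() {
  analogWrite(ENA, 0);
}

void loop() {}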

Diagram of the completed door schematic:

 

initial door design

 

Portable Power

With the intention of displaying this at the project fair, we could not guarantee consistent mains power to operate the board and motor. We decided to use a set of standard AA batteries (1.5v each) in series to provide ~12v to power the system.

Buttons

The client had requested some physical control on the module itself; we could not rely purely on a networked solution in case the door suffered a network failure. We utilised two buttons in coordination with pull-down resistors, operating on interrupts within our code. Concern was raised that the chickens might accidentally peck these buttons, so we incorporated a rapid triple-press feature to overcome this, as sketched below.
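
A minimal sketch of such a triple-press filter follows; the pin number and timing window are illustrative assumptions rather than our deployed values.

// Illustrative triple-press filter using an external interrupt and a
// pull-down resistor. The pin and timing values are assumptions.
const int BUTTON_PIN = 2;              // interrupt-capable pin on the Uno
const unsigned long WINDOW_MS = 1500;  // presses must land inside this window

volatile uint8_t press_count = 0;
volatile unsigned long first_press = 0;
volatile bool toggle_requested = false;

void on_press() {
  unsigned long now = millis();
  if (press_count == 0 || now - first_press > WINDOW_MS) {
    // First press (or the previous window timed out): start a new window.
    press_count = 1;
    first_press = now;
  } else if (++press_count >= 3) {
    // Three rapid presses: treat as a genuine command, not a peck.
    press_count = 0;
    toggle_requested = true;  // handled in loop(), not inside the ISR
  }
}

void setup() {
  pinMode(BUTTON_PIN, INPUT);  // the external pull-down holds the line low
  attachInterrupt(digitalPinToInterrupt(BUTTON_PIN), on_press, RISING);
}

void loop() {
  if (toggle_requested) {
    toggle_requested = false;
    // e.g. ask the motor handler to toggle the door here
  }
}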

 

Final Iteration

Code Snippets & Source

The entire source can be found here.

As mentioned previously, the door was designed to be modular, and this extends to the code, which offers abstracted handlers for the various roles. By breaking the core responsibilities down into singletons, we simplified our testing arrangements and design procedures.

It also allowed the code to flourish in readability and presentation: a simple no-nonsense command could be issued with little knowledge of the backend required; a black box.

Below are examples of such confined but rewarding functionality.

ControlHandler

Get the current time as a 32bit integer and the light level averaged.
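
The original listing was an image and has not survived extraction; the sketch below is a hedged reconstruction of the shape of these calls, assuming the RTClib driver for the DS1307 and an illustrative photodiode pin.

// Hedged reconstruction - the original listing was an image.
// Assumes the RTClib driver for the DS1307 and a photodiode on A1.
#include <RTClib.h>

RTC_DS1307 rtc;
const int PHOTODIODE_PIN = A1;  // illustrative pin choice

// Current time as a 32-bit Unix timestamp.
uint32_t get_time() {
  return rtc.now().unixtime();
}

// Light level averaged over several ADC samples to smooth out noise.
uint16_t get_light_level() {
  const uint8_t samples = 10;
  uint32_t total = 0;
  for (uint8_t i = 0; i < samples; i++) {
    total += analogRead(PHOTODIODE_PIN);
  }
  return (uint16_t)(total / samples);
}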

 

MotorHandler

Move door and validate:
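
Again the original listing has not survived; the following is an illustrative reconstruction of the call's shape, with the helper names (drive_down, wait_for_sensor, halt) hypothetical stand-ins.

// Illustrative reconstruction - the original listing was an image and
// the helper names below are hypothetical stand-ins.
DoorStatus status();                  // read the door state via the Hall sensors
void drive_down();                    // run the worm drive motor downwards
void wait_for_sensor(int sensor_pin); // block until a sensor triggers (or times out)
void halt();                          // stop the motor (the worm drive self-locks)

bool close_door() {
  if (status() == DOOR_CLOSED) {
    return false;                     // refuse commands that could damage the door
  }
  drive_down();
  wait_for_sensor(BOTTOM_SENSOR_PIN);
  halt();
  return status() == DOOR_CLOSED;     // validate the final position
}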

 

Status may return 6 possible values:
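
The six values themselves did not survive extraction; based on the states discussed in the sensor research, a plausible (hypothetical) set would be:

// Hypothetical reconstruction - the original six values were lost with
// the listing. These reflect the states discussed in the sensor research.
enum DoorStatus {
  DOOR_OPEN,      // top Hall effect sensor triggered
  DOOR_CLOSED,    // bottom Hall effect sensor triggered
  DOOR_OPENING,   // motor driving upwards
  DOOR_CLOSING,   // motor driving downwards
  DOOR_STUCK,     // movement requested but no sensor reached in time
  DOOR_UNKNOWN    // no sensor triggered and no movement in progress
};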

 

CanbusHandler
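
The original snippet here is likewise missing; an illustrative use of the wrapper sketched in the code design section might look like the following, where the addresses and the status call are assumptions.

// Illustrative reconstruction - not the original listing. Reports the
// door state to the hub using the wrapper from the code design section;
// the address values are hypothetical.
const uint8_t HUB_ADDRESS = 0x01;
CanBusHandler bus(0x02);  // this module's own (hypothetical) address

void report_status() {
  uint8_t state = (uint8_t)status();            // current door state
  CanMessage msg = bus.wrap(HUB_ADDRESS, &state, 1);
  bus.send(msg);                                // hand off to the CAN library
}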

 

Client Installation

At this point in the project timeline we were approaching Christmas 2018; our prototype system was working as intended and designed to operate within the client's environment. We did not install the system directly, instead providing all resources (hardware and software) to the client. It did not include the CAN bus integration, but ran as an individual module operating standalone. The client was pleased with the solution provided, but did note feedback for future designs.

Note: the client owns various 'runs' for their chickens, and the initial prototype was deployed in a secure run that was monitored remotely; if a serious fault occurred it was dealt with immediately. The client secured the device within an IP-rated box on the side of their desired coop and handled issues we had not delved into (owing mostly to time constraints), such as weatherproofing the system. As of this report (March 2019), the module is still operating as intended.

 

Known Issues

Below are various issues that we noted throughout the module's lifetime.

Power drain

The current solution requires a consistent power source. Most smallholders do not have power onsite or within range of the coop, and as such opt for battery-operated solutions. Our system, being a prototype, does not accommodate this. COTS solutions often operate on batteries, and we should look to emulate this in future.

Floating buttons

Unfortunately, our buttons suffered from a side effect known in electronics as a 'floating' input: without a firm connection to ground, an input pin picks up stray voltage and can trigger erratically. While we tried to handle this problem with pull-down resistors, they appear not to have been incorporated correctly. Even with the triple-press interrupt feature, there is a chance that the door will open or close itself.

Material, Design and Use

While the prototype was not intended for deployment, its materials are in theory a poor choice. Wood can swell in the rain and change its dimensions, jamming the door. We did opt for an aluminium door to hinder this, but never tested our system in a real environment (outside of the client's coop, which had its own door).

 

Future work

The door was a cornerstone of the system: it could not fail and had to be self-sufficient upon a network failure. While we delivered a concise module, there are improvements that, given the time, would have strengthened the design.

EEPROM Management

We could have utilised the internal EEPROM to save information in case a network failure occurred, such as the last recorded door state and the motor's position in increments. EEPROM does have a limited lifespan, but this would most likely outlast the physical electronic components.

Battery Powered, Relays and RTCs

The RTC used was a DS1307; it lacks true sophistication when compared to the DS3231, an RTC with alarms and more. Had we utilised an alarm-capable RTC with relays for power management and sleep parameters on the board, it is very plausible that the door could have been truly battery powered without mains. Given our client provided mains power onsite, this was not a direct requirement and would have drained our resources further.

The Arduino Uno, while an excellent prototyping board, is limited to just that - prototyping. In a true deployment we could not have used it for a production implementation. A different board, SoC or even a custom PCB could have been designed, utilising an interrupt on said system with an RTC and relay to manage power to the motor.

Update Hard-coded Door Cutoff Time

As it stands, the door has a hard-coded cutoff time: a time at which it will open or close regardless of the other settings within the system. At present this value is the nearest hour after dawn or dusk and updates month to month, as sketched below. In the future it would be nice to give the end user the capability of editing these values themselves and updating the door accordingly.
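
For illustration, such a cutoff could be stored as a simple month-indexed table; the hour values below are placeholders, not the deployed ones.

// Illustrative month-indexed cutoff table (hours, 24h clock). The
// deployed values differ; these are placeholders only.
const uint8_t OPEN_CUTOFF_HOUR[12]  = { 8, 8, 7, 7, 6, 6, 6, 6, 7, 7, 8, 8 };
const uint8_t CLOSE_CUTOFF_HOUR[12] = {17,18,19,20,21,22,22,21,20,19,18,17};

// Regardless of the light reading, force the door closed at the cutoff.
bool past_close_cutoff(uint8_t month, uint8_t hour) {
  return hour >= CLOSE_CUTOFF_HOUR[month - 1];  // month is 1-12
}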

 

Coop Air Condition Sensors

Premise

From the research conducted, it was noted that high humidity inside the coop is bad for the health of the flock. When roosting at night, the chickens release moisture with their breath, increasing the humidity in the coop. This is unhealthy for the flock as it can lead to an abundance of mould growing in their bedding. The mould then releases spores which the flock inhale, leading to sickness. Humidity above 75% will also decrease egg production.

It is also noted that low temperatures will decrease egg production, as energy that would have gone towards producing eggs is instead spent keeping warm. High temperatures result in the birds suffering from heat stress, which causes a lower growth rate and reduced egg production.

Poultry dust is also a major concern in the industry as this can cause respiratory issues such as asthma in humans and regulations state that for the health of the poultry, inhalable dust must not exceed 10mg/m³ over an 8 hour period.

With the above health concerns in mind, we have created a device which monitors temperature, pressure, humidity and dust levels inside the coop itself. This should allow the smallholder to recognise these issues occurring and act accordingly.

 

Final Iteration Overview

Image of completed air sensor

This device is a simple monitoring solution which transmits data to our main hub over the CAN bus. The data transmitted consists of temperature, humidity, pressure and dust levels from inside the coop.

Overall, the specific parts of this module consist of an Arduino Uno, a BME280 temperature/humidity/pressure sensor, a GP2Y1010AU0F dust sensor and a CAN bus connection to the hub.

 

Here is a Fritzing diagram of the complete circuit.

Wire diagram of air sensor

 

Iteration 1 (Research)

From the initial premise, we knew we needed sensors to monitor temperature, humidity and dust levels. Dan Knox recommended we look into the BME280, which has sensors for temperature, humidity and pressure, alongside the DHT11/DHT12 sensors.

From our research, we noted that the DHTxx range requires pull-up resistors and values can only be retrieved every 2s, whereas the BME280 has better accuracy in its humidity readings. As both sensors suited our requirements, we opted for the BME280, as it was readily available in the Shed and required less electronics knowledge.

When we approached Dan for guidance on sensors, he mentioned he had some GP2Y1010AU0F available in the Shed. We found that this sensor is one of the more common dust detectors available.

There is a newer model, the GP2Y1014AU0F, which operates in the same way as the one we are using but has improved accuracy (±15% vs ±30%). We decided to use the older version as it was available in the Shed, and ordering the newer model in would have taken time we did not have.

These sensors work by having a photo-detector and an LED emitter inside the unit. When dust enters the hole, the light scatters: the more dust there is, the greater the intensity of the scattered light. The circuit we needed to build requires a capacitor and a resistor, which allow the LED to pulse on and off rather than stay constantly on. This is because the LED's intensity decreases as it ages, so pulsing helps extend its lifetime.
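
A minimal sketch of this pulsed sampling, following the timing figures given in the Sharp datasheet, might look like the following (the pin choices are illustrative):

// Pulsed sampling of the GP2Y1010AU0F, following the timing in the
// Sharp datasheet. Pin numbers are illustrative.
const int LED_PIN = 4;       // drives the sensor's internal IR LED
const int MEASURE_PIN = A0;  // analogue output of the sensor

void setup() {
  pinMode(LED_PIN, OUTPUT);
  digitalWrite(LED_PIN, HIGH);  // LED off (active low) until sampled
}

int read_dust_raw() {
  digitalWrite(LED_PIN, LOW);        // LED on
  delayMicroseconds(280);            // wait for the output to settle
  int raw = analogRead(MEASURE_PIN); // sample mid-pulse
  delayMicroseconds(40);             // complete the 0.32ms pulse
  digitalWrite(LED_PIN, HIGH);       // LED off again
  delay(10);                         // ~10ms period between pulses
  return raw;                        // 10-bit ADC value
}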

We decided to use the Arduino Uno due to the reason outlined in the door module section.

 

Iteration 2

Once we decided which sensors to use, we researched how to interface each with the Arduino separately. Adafruit provide libraries for the BME280 via the Arduino library manager, which allowed easy integration with this sensor, alongside guides which helped us wire it up. We opted for the I2C wiring, as those instructions used fewer pins on the Arduino.

We had more issues with the GP2Y1010AU0F dust sensor, as it has more connections than the BME280 and needs additional resistors and capacitors; however, we found the diagram displayed below, which helped us wire the system up.

circuit diagram of sharp dust sensor

Source

 

The example code provided by Sharp was difficult to understand, as it reads voltage levels as an input and converts them into a reading in µg/m³. The formula is as follows:
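
The formula itself did not survive extraction; a commonly used linear fit for this sensor converts the raw ADC reading to a density as shown below (the constants are the widely cited approximation, not necessarily the ones in Sharp's example).

// The original formula did not survive extraction; this is a commonly
// cited linear fit for the GP2Y1010AU0F, not necessarily Sharp's exact
// example code.
float dust_density_ugm3(int raw) {
  float volts = raw * (5.0 / 1024.0);  // 10-bit ADC with a 5v reference
  float mgm3 = 0.17 * volts - 0.1;     // linear fit: voltage -> mg/m3
  if (mgm3 < 0) mgm3 = 0;              // clamp the no-dust region
  return mgm3 * 1000.0;                // convert mg/m3 to ug/m3
}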

 

Once we proved that both these sensors worked separately, we started work on integrating them into the same circuit.

 

Iteration 3

Integrating both into the same circuit and combining the code was relatively simple; however, we realised that the functionality of the dust sensor could be moved into a custom library to keep the main Arduino sketch neat and easy to read. Therefore, instead of doing the calculation for the dust levels inside the main sketch, a simple call to dustSensor.get_dust_density() was all that was needed to return the value.

 

/*
 * Example use of the dust sensor library - the conversion logic is
 * hidden behind a single call to get_dust_density().
 */
uint16_t get_dust_readings() {
  // Read the dust density from the library
  float dd = dustSensor.get_dust_density();
  // Truncate to an integer for transmission
  uint16_t dustDensity = (uint16_t)(dd);
  debug("Raw dust: ");
  debugln(dustDensity);
  return dustDensity;
}

 

Final Iteration

picture of two arduinos connected via CAN bus

 

After cleaning up the main sketch, we started to implement CAN bus into the system. Using the CANBusHandler library we created earlier, we were able to send data between two Arduinos; the packet structure is detailed further in the networking section.

 

serial monitor displaying received data on CAN bus

 

Once we were certain that the circuit worked as expected, we soldered the components to a stripboard with help from Keith Greenhow and Mike Berry. The code for this component can be found below:

air_tester.ino

coop_condition_monitor.ino

 

Dust sensor library:

 

Known Issues

There is currently one known issue with this device:

As the device operates on a fixed 5s loop, if the hub solicits readings from all sensors on this device at the same time, only the last solicited message of that period is recognised, resulting in only one of the requested values being returned. The current workaround is to stagger the requests made from the hub such that only one request is present per period. In the future it would be ideal to remove this artificial delay and buffer the requests received from the CAN bus on the device.

 

Future Work

Future work could research whether it is possible to make this system low powered, to accommodate the requirements of all smallholders, not just our client.

 

Egg Identification

Premise

Unlike some of the other issues looked into, like automation of the coop door and air condition monitoring, egg identification does not provide any direct benefit to the chickens; it is more a quality-of-life feature for the smallholder. Many of the smallholders we spoke to stated that one of the benefits of owning chickens is that you don't just give them a decent, loving life - you also obtain responsibly sourced, free-range eggs.

That being said, the idea behind this feature is to provide a system that informs the smallholder when there are eggs in the coop to collect. Checking is not much of a problem whilst the weather is nice, but come rain and snow it becomes tedious to go out and find nothing there. As such, we thought this would be an interesting problem to tackle.

 

Final Iteration Overview

Though definitely still in the prototyping stage, we have developed software to train and classify images of eggs using a classification neural net. We have developed a client-server architecture such that a simple client device takes an image and sends it to an authenticated REST endpoint on the server for processing. This architecture was adopted out of necessity rather than preference: the time required to predict the category of an image is simply not feasible on a standard CPU, and the work needs offloading to a compatible GPU. For our solution we opted to use the open source deep learning library PyTorch, alongside an Nvidia GPU with CUDA core support, a GTX 1060.

A simple device in the coop, currently built using a Raspberry Pi Zero W, has an attached camera that points at the nest box. When a request is received from the control hub over the CAN bus, this device takes an image and sends it off for external processing. When a classification is received back from the server, it is returned to the control hub.

As for the current accuracy of our solution, we have achieved a result of around 85-90%. This should be taken with a degree of caution, however, as these results were achieved in testing within a controlled environment. The training data used contains around 300-400 images, with approximately 75% taken by us and the other 25% obtained via Google image search. An interesting revelation at the project fair was that the network seems to have trained heavily on the shape features of the eggs rather than their colours; this became apparent when it accurately identified a foil-wrapped chocolate Easter egg.

 

Here is an image showing five cycles of retraining the neural net against the training data and then checking against the testing images. As can be seen, we achieve results of 0.861.., 0.875, 0.814.., 0.840.. and 0.871.. against the testing images. This gives a mean accuracy of approximately 0.852, or 85%.

Training NN results

 

Below is an image of our test environment nest box; we aimed to replicate the inside of an actual coop with some accuracy. The walls are wooden and the floor is coated with straw.

Egg identifier nestbox

 

Iteration 1 (Research)

Before setting out to build this device we need to look into what tools are available for performing this kind of task. Having looked into this field in the past, after seeing an article about a Japanese cucumber farmer classifying cucumber types, we know that these types of tasks can be solved using machine learning. So that is where we will begin.

 

Looking into the available tools, the two main machine learning frameworks we could find were TensorFlow and PyTorch. TensorFlow is a machine learning platform developed primarily by Google and is the most popular framework available for this kind of task. TensorFlow has a huge community and an equally large learning curve; it is a complex tool that allows you to perform incredibly sophisticated operations. On the other hand, PyTorch is open source software developed primarily by Facebook and is somewhat smaller than TensorFlow. PyTorch is still able to perform complex machine learning tasks, but it does lack some of the features for fine-tuning neural nets that can be found in TensorFlow.

Reading a blog post written by Yashvardhan Jain, he concluded that TensorFlow is much better for production models and scalability, whereas PyTorch is easier to learn and lends itself better to creating simple prototypes, passion projects and proofs of concept. He attributes the smaller learning curve of PyTorch, compared to TensorFlow, to its more intuitive, pythonic style of programming. His blog post can be found here.

To get some experience with how these different frameworks performed, we looked into example classifiers for each: a dog breed identifier for TensorFlow and a flower classifier for PyTorch. Both performed well and had quite good accuracy when identifying their respective types. As for code comprehension, fitting with what Yashvardhan said, the more pythonic style of PyTorch made it much simpler to understand.

In the end we decided to go with PyTorch as our framework of choice. Though the community is smaller, the more Python-like syntax and the ability to quickly prototype sold it as the better choice for our current purpose. Once we have created a working prototype and proven that the concept works, we should consider swapping to TensorFlow for production.

 

Iteration 2 (Building the NN)

Now that we have decided what we are going to use to construct the classifier, we should start looking into how to implement it. For the initial stages this will simply be seeing if we can train and predict from our own neural network.

 

Using PyTorch's beginners tutorial on image classification we were able to create a simple network that returned the probabilistic likelihood that an image was either an egg or a plane. This approach uses a pretrained neural net as the base, and as such only the last N layers are retrained to classify our desired images. We chose the comparison of plane and egg to convince ourselves that some level of identification was happening: with two classes so dissimilar to one another, the network had to provide somewhat accurate results, and if it did not, the likelihood was that we had done something drastically wrong within our code. It turned out that the classification worked.
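A minimal sketch of this transfer-learning setup is shown below, assuming the pretrained VGG16 base that we describe using later; the exact layers retrained in our implementation may differ.

# Hedged sketch of the transfer-learning setup on a pretrained VGG16 base.

import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.vgg16(pretrained = True)

# Freeze the pretrained feature-extraction layers.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classifier layer with a two-class head (egg / plane).
num_inputs = model.classifier[6].in_features
model.classifier[6] = nn.Linear(num_inputs, 2)

# Only the replaced layer's parameters are handed to the optimiser.
optimizer = optim.SGD(model.classifier[6].parameters(), lr = 0.001, momentum = 0.9)
criterion = nn.CrossEntropyLoss()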

We got an average of around 97-98% training accuracy; here are some of the results our classification neural net produced. For the below images, index 0 was the egg training set and index 1 the planes set.

 

Early egg classification results

[Index 0.0, Classification: 1.0]
[Index 1.0, Classification: 3.7954744902535253e-10]

 

Early plane classification results

[Index 0.0, Classification: 0.0006500619929283857]
[Index 1.0, Classification: 0.9993497729301453]

 

The first image was successfully classified as containing eggs with a 100% prediction rating; the probability of it being a plane was less than one billionth. As for the second image, it was recognised as being a plane, index 1.0, with 99.9% certainty, the egg class receiving a prediction rating of just 0.06%.

As can be seen from the above results, we had achieved a somewhat working solution, ignoring the fact that it distinguishes planes from eggs rather than detecting the presence of eggs.

 

Iteration 3 (Training the Network)

Now that we had created a working demonstration of the neural net successfully classifying two distinct objects, the next step was to train the network to recognise the presence and absence of eggs within similar environments. To do this we are going to need to produce a set of training and testing data.

 

We decided it would be best to set up an environment somewhat similar to the conditions that would be experienced if this system were deployed in real life. We built a small rig using a wooden basket and hay purchased from Pets at Home to mimic the inside of a coop. Using test data that shared this similar scenery, we hoped to train the neural net on the features of the eggs themselves rather than the surroundings.

Fake coop

 

To avoid the network overfitting to a certain size or colour, we attempted to use a variety of different eggs alongside image preprocessing to help normalise the images. The range of eggs we had available to us during training can be seen below.

Egg types

 

As for the image preprocessing, this is performed on all images prior to classification, and there are two reasons for doing it. The first is to avoid overfitting when training: attributes like orientation and brightness are pseudo-randomly altered using rotation, mirroring, normalisation and other transformations, which helps avoid over-training on these variable features. As for the second reason, the neural net is set up to accept images of a certain size, so cropping and scaling are performed to ensure that the images are in the correct format to be accepted.

 

Below is the set of transformations that are applied to the training data. The parameters of the normalisation function, Normalize([..], [..]), as well as the resized crop, RandomResizedCrop(224), are set as they are due to the pretrained neural net we used as a basis for our image identification, 'VGG16'.

# Transformations used on the training data.

from torchvision import transforms

def get_training_transforms():
    return transforms.Compose([
        transforms.RandomRotation(90),
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.RandomVerticalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])

When we are processing an image for prediction we simply centre-crop it to the size accepted by the neural net and again apply normalisation.
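A minimal sketch of this prediction-time preprocessing follows; the intermediate resize value is an assumption, while the crop size and normalisation constants mirror the training pipeline above.

# Hedged sketch of the prediction-time preprocessing.

from torchvision import transforms

prediction_transforms = transforms.Compose([
    transforms.Resize(256),        # assumed intermediate size
    transforms.CenterCrop(224),    # centre-crop to the input size VGG16 accepts
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])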

 

Below is a sample of the training data used for our neural net alongside the classification of an image with an egg present and another absent.

 

Subset of training data

Subset of training data used within neural net.

 

Egg classification results

[Index 0.0, Classification: 0.8653683066368103]
[Index 1.0, Classification: 0.13463164865970612]

 

 

No egg classification results

[Index 0.0, Classification: 0.04247020557522774]
[Index 1.0, Classification: 0.9575297236442566]

 

Iteration 4 (Client-server Architecture)

Now that we have produced a working prototype of a neural net that classifies the presence of eggs with reasonable success rates, we can start looking into how to integrate it into the wider system. We are going to need to build a harness that, when a request is received to check for eggs within the coop, proceeds to capture an image, classify it and return the result.

 

As the device is going to be based inside the coop itself it will be connected via CAN bus to the other modules. For now we simply need to know that it will use a logic level converter connected to an MCP2515 to communicate on this bus. To learn more about the integration of CAN visit the networking section here.

Knowing the device will be connected via CAN bus we need to write a simple CAN interface that allows the device to receive and send messages. The implementation of this is a slimmed down version of that found within the main control hub, it asynchronously listens to the bus for messages and passes these to the correct handlers when received.

A single module has been created that is responsible for capturing the image, having it classified using the neural net, and then returning the result to the caller. When the saved image becomes stale a simple garbage collector will remove it.

The implementation of the garbage collector and main function for classification can be seen below.

# Function for taking a picture, predicting the result and responding to the hub.

async def main(self):
    """Check coop for egg presence and return result."""

    try:
        success = await self.take_picture()
        if success:
            class_idx = await self.predict()
            res = self.labels[str(class_idx)]
            print("Prediction result: {}".format(res))
            await self.send_to_hub(res)
        else:
            print("Failed to take picture.")
    except Exception as e:
        print("Failed predicting if eggs or not: {}".format(e))

# Image garbage collector.

import asyncio
import glob
import os

async def img_garbage():
    """Garbage collector for removing stale temp images from the filesystem."""

    while True:
        # Sweep the temp directory every five minutes.
        await asyncio.sleep(300)
        ts = await get_time()

        # Remove any image not modified within the last fifteen seconds.
        for f in glob.glob("temp/*.jpg"):
            mtime = os.path.getmtime(f)
            if ts - mtime > 15:
                os.remove(f)
 

 

When transferring this code from a desktop PC to the Raspberry Pi Zero we ran into a few issues getting PyTorch installed; it appears that not many developers run these frameworks on small SoCs, and as such there is little support for ARM-based devices. There were a couple of guides online, but these were mostly unsuccessful; this solution from Amrit Das was well rated, but we were still unable to get it to work. In the end we resorted to compiling from source, which, on a Pi Zero, took an unbelievable amount of time.

Once installed, we started the software to make sure it functioned, and we immediately ran into another issue. We were unable to get the network to perform within our RAM limits or in any acceptable time-frame. In fact, the behaviour we witnessed was simply waiting a long time for the software to crash; we never made a single working prediction. We quickly came to the realisation that operating on a Pi Zero was not going to be feasible.

 

We were determined not to let all this research and time go to waste; we just needed to come up with a solution that provided the computing power to use the neural net. We decided on making the classification of images an external service. The device in the coop is only responsible for taking the picture and contacting this server with the image to be classified. The server receives the image via a POST request, performs classification on it, and returns the classification response to the client. Below is a simple diagram of the architecture we envisaged. This setup should allow us to use much more powerful hardware.

 

Client-server architecture egg identifier

 

Having just faced an issue with a lack of computing power, we looked into how well the neural network performed when only a CPU was present, and we were surprised at how poor it was in comparison to when a GPU was utilised. We managed to find some online benchmarks of a dual Xeon E5-2630 v3 CPU versus a standard Nvidia TITAN GPU, and the Xeons came in at just over 14x slower. This comparison can be found here. From this we decided that the machine running this service would require an Nvidia GPU, such that CUDA cores could be utilised to speed up performance.

This client-server model was implemented and works well. The processing happens in reasonable time and the responses from the neural net are somewhat accurate. To view the code for this module click here.
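A minimal sketch of the client-side upload is shown below; the endpoint path, authentication header and function name are illustrative rather than the project's actual values.

# Hedged sketch of the client-side image upload.

import requests

def classify_remotely(image_path, server_url, api_key):
    """POST the captured image to the classification server and return its response."""
    with open(image_path, "rb") as img:
        response = requests.post(
            "{}/classify".format(server_url),    # hypothetical endpoint
            files = {"image": img},
            headers = {"Authorization": "Bearer {}".format(api_key)}    # illustrative auth
        )
    response.raise_for_status()
    return response.json()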

 

Known Issues

There are a couple of known issues with the current implementation. The first is scalability: as it stands there is no server infrastructure running the classifier. If we were to support and manage the egg classification servers ourselves we would require some sort of subscription fee to help cover running costs as well as staff. With a single client, as we have for the scope of this project, this seems unreasonable; however, so does the alternative of having the client run and manage it themselves. One solution would be to look into how much it would cost per client to run the server(s) on Google Cloud.

The other noted issue with the current implementation is that when using a high-fidelity camera the POST request is ignored due to the image size. This is because the web server is configured with a maximum POST request size of 5MB. This should be an easy fix, however: adding client_max_size = custom_max_size to the launch parameters of the web server should allow a custom upper limit to be set.
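As a hedged sketch, this is how the limit could be raised when constructing the aioHTTP application; the 20 MiB figure is illustrative.

# Hedged sketch: raising aioHTTP's POST size limit on application startup.

from aiohttp import web

app = web.Application(client_max_size = 20 * 1024 ** 2)  # accept uploads up to 20 MiB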

 

Future Work

There is a lot of future work that could be done to make this system more robust and accurate. As stated before, this device is a proof of concept rather than a fully featured product. Below is a list of some of the future work that could help to improve this software.

 

 

Water Drinker Monitoring

Premise

Like humans, chickens require sustenance in the form of food and water to survive, and again like humans, they can go much longer without food than without water. A chicken without access to water may start to decline in health within twenty-four hours and, depending on environmental conditions, may perish within a few days. As such, it is paramount that this key resource for the chickens' health is readily available.

In an attempt to spare smallholders' flocks from this potentially life-threatening situation, we have set out to develop a device for our system that monitors the presence of water within a drinker.

 

Final Iteration Overview

Out of all the devices within the system this one is the simplest. Utilising an off-the-shelf device produced by Xiaomi for their smart home suite of products, we are able to detect a change from the presence of water to no water. This small wireless device has a claimed battery life of two years and an IP67 rating.

 

The wireless Xiaomi Aqara water sensor can be seen below:

Xiaomi Aqara water sensor

 

This device functions via two small contacts on its reverse that, when bridged by water, form a complete circuit. When this circuit transitions from complete to broken, or vice versa, the device sends out a message over ZigBee. Using a ZigBee USB dongle we are able to receive this message and output it to MQTT using an open source library, Zigbee2MQTT. The MQTT topic published via this library is then subscribed to by the main hub of our system and, voila, a notification system for low water levels has been built. To learn more about how this information is received via the hub please click here to visit that section of the corpus.

 

Iteration 1 (Research)

Our first step was to look into whether there were any devices that could perform the task we required without us having to build a bespoke circuit. Speaking with our project supervisor he suggested looking into the Xiaomi Aqara water sensor as he had some personal experience using them within his own home. This is where our research began.

 

We looked into the Xiaomi Aqara water sensor suggested by our project supervisor and found that by default this device requires access to the Xiaomi smart home bridge to function. The device outputs a data packet over ZigBee and this is decoded by the bridge before being accessible within the provided mobile app. This would have been a pain to deal with; however, we found a library that is able to parse the Xiaomi ZigBee packet and output it to MQTT, which can be found here. This product would only suffice as a proof of concept, however, as its IP67 rating means it cannot be submerged for prolonged periods of time.

Another similar device we looked into was the Samsung SmartThings water leak sensor. This sensor cannot be submerged at all and hence would not suffice for building a prototype for this project.

The last sensor we looked into was the EZVIZ T10 wireless water leak detector; like the Xiaomi Aqara, this device has been certified with an IP67 rating. It could be used for prototyping within this project but would not suffice for actual deployment. No information about how this device communicates could be found, but given that it requires connection to a central hub and has an estimated range of 100m, we are going to assume it uses ZigBee. No information could be found as to whether the packet is encrypted and, if so, whether there is any way to decrypt it.

 

For the prototyping of this device we have opted to go with the Xiaomi Aqara. The Samsung device is an immediate no due to the fact it cannot be submerged at all, and the EZVIZ T10 has so little available information about how it communicates that it may just end up being a waste of money.

 

Iteration 2 (Implementation)

As we have opted to go with the Xiaomi Aqara water sensor we simply need to follow the setup steps outlined on the ZigBee2MQTT library documentation.

 

The first step in setting up this device was to flash the USB ZigBee receiver (CC2531) with the appropriate firmware. As we did not have access to a CC debugger, our project supervisor did this for us. Once this was done, it was simply a case of setting up a Raspberry Pi Zero with the receiver connected and running the required Node project. On the Pi we installed Node, cloned down the project repository, installed the required dependencies and set the appropriate configurations. We then installed and ran Eclipse Mosquitto, which is used as the MQTT broker. The last step was to pair the device, which was simply a case of holding down the reset button until the blue light blinked three times. Once this was done the device was fully set up and the main control hub was able to subscribe to the configured topic.

To learn more about how the hub works with this MQTT broker please click here to visit that part of this corpus.

 

Chicken Location Tracking

Premise

Being able to locate any pet is a major task in its own right, let alone a flock of them. If a client could accurately monitor the location of their flock, it would provide peace of mind, especially when away from home. The mechanism to do so could vary in technicality and capability, from a simple flashing LED to sophisticated historical data gathering of behaviour and locale.

The full realisation of this module could have been a project in its own right; however, the simple but effective solution provided is often all the end user needs, and in the end it acts as a proof of concept and nothing else.

 

Final Iteration Overview

While primitive, the work we present as our final iteration is a standalone proof of concept, detached from the centralised hub. It incorporates an ESP32 development board with a series of LEDs attached to various GPIOs, set high (or low) based on the location status. Utilising a BLE beacon, we simulated a 'ring' that could be attached to a chicken's foot and would broadcast its location. The beacon had a unique ID and could be identified by the module itself. We demonstrated the working concept at the project fair with a simple range test, with LEDs displaying whether the beacon was identified or not.

Visually, the module utilises LEDs to display the state of the chicken's location: if it is within range then a blue LED is switched on, otherwise it is switched off. We did not incorporate this module into our network, as it was a proof of concept we looked into but never fully developed.

 

Iteration 1 (Research)

Wireless detection mechanisms

Using wired technology to locate a chicken was beyond the scope of reality and of the project; it was an unfeasible solution. As such we opted to find a low-powered, long-lasting and cost-efficient alternative. Without wires we naturally looked towards wireless technologies, and several immediately came to mind.

BLE

Bluetooth Low Energy, a technology utilised in various systems across the world, from smartphones to microwaves, was an optimal choice. Operating on the OFCOM licence-free 2.4GHz band and at a low cost, it was a feasible choice for implementing a proof of concept. However, the power demands of such a technology can vary, and the likelihood of the beacon lasting longer than a month was questionable.

We theorised that using multiple receivers (up to four) could allow for more sophisticated tracking via the signal strength seen at each Bluetooth receiver. In theory this would be adequate for primitive tracking.

ZigBee

At this stage we had familiarised ourselves with another 2.4GHz radio protocol known as ZigBee. ZigBee is an incredibly low-powered and cost-efficient protocol often seen in smart home devices (such as light bulbs, smart switches, etc.).

However, to utilise this correctly we would have to incorporate our own beacon of choice to ping its location, and potentially sync multiple beacons together if we planned on seriously narrowing down coordinates. While the idea was efficient, implementing it within the existing timeframe was not feasible.

RFID

Radio Frequency Identification, a solution in which passive tags are typically energised via magnetic coupling rather than broadcasting their own radio signal, was a choice we considered. As consumers we often see industrial implementations in shops, where tags applied to expensive goods deter thieves. The premise was that by applying a tag to a chicken we could utilise hotspots to ping its location back and forth. While we could not guarantee the precise location, we could estimate the nearest last-known position that the chicken pinged from, giving a rough idea of position.

Conclusion - BLE

ZigBee required too much time and precision to build, design and introduce. RFID was expensive and not effective enough for our needs; it would suffer over distance, impeding the design we envisaged.

Owing to this we selected BLE and purchased five ESP32 development boards. These boards are generic clones of the ESP32 DOIT DEV Kit v1 from Amazon.

 

doitesp32

Source: https://docs.zerynth.com/latest/_images/doitesp32.jpg

 

We selected these boards as they provided easy to access interfaces, expansive developer pinouts and were far cheaper than alternative solutions we researched.

 

Iteration 2 - Using 4 Boards

The initial plan was to utilise four boards to perform primitive location tracking based on signal strength; however, this would require coordination and synchronisation, without which they would fall out of sync and provide inaccurate data.

We soon encountered a problem, however. Not only would synchronisation cause concern, but linking and maintaining four receivers while providing accurate measurements was a more than difficult task. Bluetooth signal strength was not nearly accurate enough, especially with competing devices on the same frequency band. We shelved this design in order to focus on other modules, as we assumed that without consistent location accuracy the module would fall short of anything useful.

 

Iteration 3 - Final Solution

At a later date we once again confirmed the state of the system with the client and mentioned the chicken locator. To our surprise, the client noted that just knowing whether a chicken was within 100m of the coop would be more than adequate. As it transpires, chickens are likely to roost in the same location each night with their known flock, regardless of the distance travelled that day. Therefore, if the tracker stops reporting from the coop's locale, the smallholder has a strong indication that the chicken may have met a predator.

As such we utilised a single ESP32 board with an LED and wrote primitive code to handle the basic proof of concept. Owing to time constraints we could not risk incorporating it within our main system and opted to keep it standalone, hence using an LED as the representation.

A callback is provided within the code on the ESP32 that verifies the beacon identification.

/*  Callback handler class, required for type cast.
 *  `id` holds the known address of the chicken's beacon and `chicken` is a
 *  global state flag read by the main loop. */
class ChickenCallback: public BLEAdvertisedDeviceCallbacks
{
    void onResult(BLEAdvertisedDevice advertisedDevice)
    {
        // Check whether the address of the identified device matches the chicken beacon
        if (BLEAddress(id).equals(advertisedDevice.getAddress()))
        {
            digitalWrite(23, HIGH);  // Switch on the 'in range' LED on GPIO 23
            chicken = 2;             // Update the global state flag
        }
    }
};

As can be seen, while primitive, it works as intended.

Known Issues

It is plausible that a chicken could suffer an illness, lose its tracker or somehow lose its claw. If any of these occurred then we would suffer endless false positives, with a beacon lost in the underbrush constantly reporting that it is within 100m.

Furthermore, we cannot accurately identify the precise range of the chicken, or whether its signal is being blocked by brick or thick brush.

The battery life of the beacon is unknown; whether it would withstand a long period of time or harsh weather, or even continue to operate correctly, is entirely down to the COTS beacon we purchased. Had we developed our own, several improvements could have been made.

Future work

We would like to implement a visual representation of the current status of the chickens on the main interface, potentially providing a heat map via the Google Maps API, and even historical data from the database. This could be incredibly useful for tracking the behaviour of chickens, such as how they escape a pen or where they make use of space. This feature would absolutely require network access.

If we could not use accurate position tracking, we could use multiple fixed receivers to identify which one a chicken is closest to. This could be useful if the client has a large area of land that extends beyond the range of a single receiver, or if signals are blocked.

 

Control Hub

Premise

The main premise behind the control hub is to have a single device which coordinates the network. We decided during the design phase that instead of building automated behaviour into each of the different modules, such that they could operate completely isolated from one another, we would use a single device which controlled their behaviour by sending requests. On the surface this may sound like a worse solution than isolating each module; however, we have a few reasons to justify this design choice.

The first reason we opted to build the system in this manner is that it allows us to slim down the other modules to run on less powerful hardware. Most of the other devices connected via the CAN bus do not require any other form of network interface, meaning we can use relatively inexpensive microcontrollers, like the ATmega328P, connected to a single CAN bus interface (MCP2515) and the required sensors. If we had opted for each device to operate in an isolated capacity, they would all require access to the external database, and as such would each need an interface to communicate with it.

The next reason for our choice is that it makes adding a user-programmable layer onto the network a much simpler process. Code or settings can be submitted to this main control hub, which in turn makes the external requests to the devices within the modular network.

The last big reason as to why we chose to include a main control hub is that it provides a singular point of entry to the entire network of devices. This should make it easier to provide a way for external services, like the web application, to communicate with the system.

 

Final Iteration Overview

The final iteration of the control hub was implemented on a Raspberry Pi 3 Model B+, though ideally we wanted to use an Odroid C2. Having purchased one, we were hesitant to connect an MCP2515 CAN interface due to it potentially putting an overvoltage of 5V into the Odroid's 3.3V-rated GPIO pins. We faced the same issue with the Raspberry Pi, but as that board was less than half the price we were more willing to take the risk. We developed small logic-level-converting circuits to step the 5V signal down to 3.3V, and these are present on the final result. A picture of the main hub and the logic level converter can be seen below.

 

Main control hub

Logic level converter

 

As for functionality, the main control hub orchestrates the devices on the network: it solicits updates from the sensors along the CAN bus and can issue requests to actionable devices like the automated door. Upon receipt of these updates from devices it adds the information to the external database. The hub listens on three different channels: it is subscribed to the water sensor topic via MQTT; to the door, air sensor and egg identifier over CAN bus; and lastly it runs a REST server over TCP.

The REST server provides authenticated endpoints that allow outside services to communicate with the main hub. These endpoints include functionality for forcing devices to update their readings, retrieving the latest reading stored on the hub, viewing the system logs, and more.

The last major feature of the main control hub is a programmable language; this language allows the end user to write programs for the system dictating its behaviour. Actions can be set to occur on specific events, at a given time of day, or when a sensor reading is greater than, less than or equal to a specific value. To learn more about this language, a section of this corpus has been devoted to it; click here to go there.

 

Iteration 1 (Research)

For the main hub, as the scope of what the device needs to do is quite well defined, we will conduct research into the programming language best suited to developing it and the single-board computer best suited to the job.

 

Single-board Computer Choice

For the main control hub we will look into three popular single-board computers in order to decide on the best for the task. The three boards we will compare for suitability are as follows:

 

Raspberry Pi 3 Model B+

One of the original single-board computers commercially available, this model of the Raspberry Pi boasts 1GB of RAM alongside a 4-core ARM Cortex-A53 clocked at 1.4GHz. Storage on the device is provided by a microSD card slot, and a standard 40-pin GPIO header alongside four USB 2.0 ports are available for connectivity. The device costs roughly ~£30 at the time of writing.

 

Odroid C2

The Odroid C2 has a 4-core Amlogic ARM Cortex-A53 clocked at 1.5GHz, alongside a Mali-450 GPU and 2GB of system memory. Storage on the device can be provided by either a microSD card or an external eMMC flash storage module. As for connectivity, the Odroid has an IR receiver, a standard 40-pin GPIO header and four USB connections, all operating on their own bus. With shipping and tax included, the rough price of an Odroid C2 is ~£65 at the time of writing.

 

Asus SBC Tinkerboard 2GB

The most expensive on our list is the premium Asus SBC Tinkerboard. This device comes with a 4-core ARM Cortex-A17 clocked at 1.8GHz, a Mali-T760 GPU, 2GB of system memory and 16GB of on-board eMMC storage. The design of the board closely resembles the original Raspberry Pi and the set of available connections matches it, with a 40-pin GPIO header and four USB 2.0 ports available. The cheapest version of this board we found came in at approximately £85-90.

 

Conclusions

Comparing the three boards, it is apparent that the Tinkerboard is the most powerful. Its 1.8GHz processor is roughly 30% faster than that of the Raspberry Pi and 20% faster than that of the Odroid, though a price is paid for this speed. In terms of system memory, the Odroid and the Tinkerboard both have 2GB of DDR3 memory available compared to the Raspberry Pi's 1GB. They also both support eMMC storage, unlike the Raspberry Pi, with 16GB coming as standard on the Tinkerboard.

All of the boards compared have a good variety of connections available, all supporting the standard 40-pin GPIO header; however, the Odroid takes the lead due to its four USB ports each operating on its own bus.

The Tinkerboard and Odroid offer a GPU, though due to the nature of the device being built this should not provide much of a benefit. The device will operate for the most part in a headless manner.

In conclusion, we decided that the Odroid C2 offers the best trade-off between high specifications and price. Though the Tinkerboard was hands down the best of the three, we could not justify spending an extra £20-25 to get a slightly faster CPU clock speed and a better GPU that would rarely be utilised.

 

Odroid C2

 

Programming Language Choice

The next decision we need to make is the language to use to program the main control hub. The three major languages being considered are Python, Java and Erlang.

Other languages may have been more suitable candidates; however, being a two-person development team with only one primary developer working on the main hub, we opted to look into languages in which we have some basic level of proficiency. Within our language of choice we are searching for good support for asynchronous programming, a selection of useful libraries to help interface with given services, and good community support and documentation.

 

Python

Having grown in popularity in recent years due to its heavy use within the data science field, Python is a well-respected language with a large community of developers. For this reason, Python has an abundance of high-quality libraries for a host of tasks, ranging from CAN bus interfacing to lightweight REST server implementations. A caveat of using Python is its relatively poor support for asynchronous code: threads are hard to manage and require a lot of architectural overhead, so the go-to solution is the asyncio library. This library allows the programmer to write concurrent code as a set of co-routines which are scheduled on a main event loop.

 

Java

Java has been a staple programming language for a long time now, and with reason. It uses the well-respected object-oriented paradigm and provides a static type system that offers certain assurances about the robustness of given code. Being one of the most widely adopted programming languages, Java has a host of libraries available for a variety of tasks. As for asynchronous code support, Java provides a nice interface for thread management, and libraries like EA Async offer a more JavaScript-like promise structure.

 

Erlang

A functional language originally developed for telecoms systems, Erlang has lately become more prevalent as a general-purpose programming language. Erlang is inherently concurrent and can support thousands of lightweight spawned processes, each running in isolation with its own memory pool. Unfortunately, as Erlang has not hit the mainstream heights of the other two languages, it lacks library support for certain tasks, and the development community is somewhat of an in-crowd, with new information often shared via the Erlang mailing lists.

 

Conclusions

Considering our small development team, we have opted to use Python as the language of choice for the main control hub. We believe it will allow us to develop more code within the fixed time frame we have, due to the dynamic type system provided and the library support available for CAN bus, REST web servers, and so on.

Though not the best choice for asynchronous code we feel that the asyncio library, available in Python, is feature complete enough that it should not hinder our ability to complete the tasks we have set out to do.

 

Iteration 2 (Module Architecture)

Now that we have decided on the programming language and hardware that we will use to construct the main hub, we should now look into how we are going to structure the module and the features that it should implement.

To communicate with the other sensors in the network, the main hub is going to require two interfaces: one to listen to the CAN bus and another to listen to MQTT. The CAN interface will allow the hub to communicate with the automated door, air condition sensors and egg identification module; the MQTT interface allows communication from the drinker level monitor. The data received from these external devices will need to be stored in a long-term store, so a service for database access will also be required.

Next we need to consider how data is going to be received by, or explicitly sent to, this device; a standard TCP/IP socket interface should allow for inbound and outbound communication. This socket should be authenticated to ensure nothing is triggered by malicious actors.

The last consideration is that of the programmable language. The language should support controlling the automated door and sending notifications. It should be implemented as a separate module outside of the main hub that is able to request the execution of hub actions; this should simplify the integration of the two components, as the hub then has no dependency on the language controller.

 

From the above set of requirements we can build a model of the main control hub that looks somewhat similar to that expressed below.

Hub module design

 

The main controller has access to a set of services and interfaces, which are responsible for the inbound and outbound data across the system boundary. The language controller exists as a separate entity but is able to communicate with the hub controller.

 

Iteration 3 (System Logger)

As we have now created a design for the main hub, we can start implementing its features. The first feature we will implement is a basic system logger so we can see the run-time execution. This logger should allow processes to be tracked through the application and aid in debugging later-implemented features, especially given the asynchronous nature of the code base.

 

Within our implementation of this feature we used Python's standard library logging module and adapted it to our needs. The package supports defining custom loggers and their corresponding output streams; our logs were set up to output to standard out and a run-time file. The source code for our logger can be found here.

Initially an instance of the logger class was instantiated per module; however, this resulted in multiple log files being generated. For this reason the logger was converted to a singleton pattern. Singletons within Python, however, are somewhat convoluted.

As can be seen in the code below, when instantiating an object the class dunder-function (double underscore function) __new__() is called first, followed by the instance function __init__(). We can utilise this to implement singletons. By creating a class variable and setting its value to None, _singleton = None, we are able to check this on initial object creation and create the singleton if it is not yet defined. When we create a new object, we set the class variable _singleton to it before returning; on all subsequent instantiations of objects of this type, that already-created object is returned. This is the premise behind the logger within our system.

# Singleton design pattern within Python.

import logging

class Logger:
    _singleton = None

    def __new__(cls, *args, **kwargs):
        if not cls._singleton:
            # ...
            # Create the one and only instance on first use.
            cls._singleton = super(Logger, cls).__new__(cls)
        return cls._singleton

    def __init__(self):
        self.logger = logging.getLogger('coop_logger')
    

 

The last notable feature of our logger implementation is the option to output the caller of the function that resulted in the log entry being created. This is done via inspecting the call stack and traversing up it a fixed amount until the caller is retrieved. The implementation of this can be seen below.

# Function for retrieving calling module and function name.

def __getCaller(self):
    """Climb the execution stack to find the calling module.
    This naive implementation climbs the stack a fixed depth of four calls"""

    current_frame = inspect.currentframe()
    caller_frame = inspect.getouterframes(current_frame, 1)
    module_path = caller_frame[3][1]
    function_name = caller_frame[3][3]
    module_name = os.path.relpath(module_path, os.getcwd())
    return [module_name, function_name]

This code climbs the call stack four frames, due to there being four hops between the calling function that caused the log entry to be added and the function that actually writes it. The output from our log looks as follows:

10:14:35 ~ 03-23-19: INFO: canbus_interface.py_listen: Message received on CAN Interface: [arbitration_id: 0x0, dlc: 0x3, data: [16, 1, 1]]

10:14:35 ~ 03-23-19: INFO: devices.door.py_handle_data_resp: Handling AUTOMATED DOOR DATA message from CAN.

10:14:35 ~ 03-23-19: INFO: devices.door.py_handle_data_resp: The door reports as being in state: closed.

10:14:35 ~ 03-23-19: INFO: devices.door.py_add_record: Adding door status reading to the database.

10:14:35 ~ 03-23-19: INFO: database_conn.py_insert: Successfully added record to the database.

10:14:44 ~ 03-23-19: INFO: hub_controller.py_stop: Stopping application..

Hub system log example

 

Iteration 4 (CAN Interface)

The next feature to be implemented is the CAN interface, this Python module should allow communication between the devices on the CAN and the main hub. It should support two way communication such that the devices are able to send messages to the hub unsolicited and vice versa. This code should be non-blocking and accept incoming messages at any given point.

 

To do this we introduced a new module, canbus_interface.py, that has two internal classes, CANInterface and PacketBuilder. The packet builder implements the builder design pattern and is used to construct well-formed CAN packets that abide by our internal packet structure; more about this can be seen here, in the networking section of this corpus. The CAN interface provides functionality to subscribe devices to the CAN bus, to listen for and route received messages, and to send CAN packets.

Initially we implemented both of these classes using the Python library python-can. This library allowed a simple byte array to be generated and sent to the network by calling the bus.send() method. This made integration quite easy; however, we noticed that whilst using it, some malformed packets were being put on the CAN bus. Attempting to debug this, we were unable to grasp exactly what was causing the issue. Using the CAN inspection tools available in Linux, namely candump from the can-utils package, we were able to see that the packet about to be transmitted onto the bus was already malformed, and as such the corruption must be happening within our code. For this reason we decided to switch python-can out for native-style sockets to see if this remedied the issue, and lo and behold, it worked.

This was done by researching the specific frame format used for CAN packets, "=IB3x8s". This can be read as follows: "=" denotes native byte order with standard sizes and no implicit padding; "I" is a 4-byte unsigned integer holding the arbitration ID; "B" is a single unsigned byte holding the data length code (DLC); "3x" is three padding bytes; and "8s" is an eight-byte char array holding the frame payload.

Now that we knew this, the PacketBuilder and the send and receive functions within the CANInterface were refactored to use sockets instead of the python-can wrapper. The information about the frame format was found here, within the Python docs. Some snippets of the socket implementation can be seen below:

 

# CAN Interface socket listener loop.

async def listen(self):
    """Start an asynchronous listener attached to the CAN bus waiting for messages"""

    while True:
        if self.shutdown_flag:
            break
        if select.select([self.socket], [], [], 0.0)[0]:
            # ...
        await asyncio.sleep(0)

Points to note: when querying the interface using select.select(...), the timeout is set to zero seconds and the first queued message is taken. Once this received message is handled, the thread is yielded to other waiting tasks to ensure that this listener does not stall the asyncio event loop; this is done by calling await asyncio.sleep(0). Once control returns to this method, the queue is inspected for the next waiting message and the cycle continues. If no message is available, the thread is immediately yielded.

 

# Construct method of the CAN packet builder class.

async def construct(self):
    """Construct and return the assembled CAN message.
    An error is thrown if not all required fields are populated."""

    try:
        # Generate the CAN packet.
        # self.frame_fmt = "=IB3x8s"
        host_byte   = ((self.source << 4) | self.destination)
        data        = util.list_flatten([host_byte, self.type_code, self.payload])
        arb_id      = 0x000
        data_bytes  = bytearray(data)
        dlc         = len(data_bytes)
        raw_bytes   = bytes(data_bytes.ljust(8, b'\x00'))
        packet      = struct.pack(self.frame_fmt, arb_id, dlc, raw_bytes)

        # ...

        return packet

    except Exception as e:
        self.log.error("Failed to encode CAN packet, error: {0}".format(e))
        return None

Within this snippet, if we ignore our application-specific packet structure, the base CAN packet format can be seen. The variable arb_id fills the unsigned integer, dlc the single byte preceding the three padding bytes, and raw_bytes the eight-byte char array.

 

Iteration 5 (MQTT Interface)

We next looked at the MQTT interface. This module should allow communication from subscribed MQTT devices to be received. It does not need to support two-way communication, so packet creation is not required, and as with the CAN interface, the code should be non-blocking and accept incoming messages at any given point.

 

In comparison to the CAN interface, this module was quite simple to implement; we simply had to re-purpose a lot of the same code. Using the hbmqtt Python library, it was a simple case of redesigning the subscribe, listen and route functions so that they work with MQTT instead of CAN. The subscribe method still simply allows devices to subscribe to receive messages for a given address, and route forwards these messages to them. The only method with a little difference is the listen function, though the overall premise remains the same.

 

# Listen function for the MQTT interface.

async def listen(self):
    """Start an asynchronous MQTT subscriber waiting for messages."""

    topic_ids = [("{}{}".format(self.mqtt_prefix, dev.topic_id), 0) for dev in self.subscribed_devices]
    await self.client.connect(self.mqtt_addr)
    await self.client.subscribe(topic_ids)

    while True:
        if self.shutdown_flag:
            await self.client.unsubscribe(["zigbee2mqtt/+"])
            break
        try:
            message = await self.client.deliver_message(timeout = 0)
            self.loop.create_task(self.route_mqtt(message))
        except asyncio.TimeoutError:
            pass
        except ClientException as e:
            self.log.error("Error when receving MQTT packet: {}".format(e))

        await asyncio.sleep(0)
        

As can be seen, a similar infinite loop is present, along with the thread yield at the end of each iteration. Using the hbmqtt library, the code checks whether a message is waiting to be received and, if so, routes it; otherwise a TimeoutError is thrown and the code passes over it.

 

Iteration 6 (Network Devices)

As the hub is the aggregator of all the data from the other devices in the network, it needs a means of comprehending what it receives from them. Each device encodes data differently and has a different set of actions available to it. For this reason each device requires some form of handler on the control hub.

 

As of writing, our system has two different types of devices within its network, MQTT devices and CAN devices. The CAN devices support two-way communication, that is they can be queried and requested to perform specific actions, whereas the MQTT devices publish information but do not have any capability for handling inbound data.

In the initial iteration of the control hub, code existed within the main module, hub_controller.py, to deal with all of the different devices. This quickly became unmanageable and code duplication started to become rampant. We decided instead to implement both device types as abstract base classes and then create specific sub-classes for each device within the network.

Door, dust, humidity, temperature, pressure and egg identifier were all children of the abstract class CANDevice; the water sensor was a child of the abstract class MQTTDevice.

 

Devices directory structure

Device modules from source code

 

CAN Device

The CANDevice abstract class provided concrete implementations for:

Three abstract methods were also included that had to be implemented by the concrete children:

 

MQTT Device

The MQTTDevice abstract class was a little less feature-complete, as it could not send messages to the device. As such, the abstract class provides concrete implementations for:

Two abstract methods are included that need to be implemented by the concrete children:
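While the specific method lists are not reproduced here, a hedged sketch of this class hierarchy follows. The handle_data_resp name appears in the system log example earlier; the remaining structure is illustrative rather than the project's exact interface.

# Hedged sketch of the device class hierarchy; the structure is illustrative.

from abc import ABC, abstractmethod

class CANDevice(ABC):
    """Base class for devices reachable over the CAN bus."""

    def __init__(self, hub, interface):
        self.hub = hub
        self.interface = interface

    @abstractmethod
    async def handle_data_resp(self, packet):
        """Decode a device-specific data packet received from the bus."""

class Temperature(CANDevice):
    async def handle_data_resp(self, packet):
        # Device-specific decoding of the payload would happen here.
        ...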

 

Each device was created as an instance of its specific class and then was subscribed to the corresponding interface it required. This occurs in the main controller and can be seen below.

# Demonstration of device creation and subscription to the respective interfaces.

def __init__(self, can_type, can_channel):
    # ...
    
    # Set CAN devices
    self.door 			= Door(hub = self, interface = self.can_interface)
    self.egg_id 		= EggIdentifier(hub = self, interface = self.can_interface)
    self.temperature 	= Temperature(hub = self, interface = self.can_interface)
    self.dust 			= Dust(hub = self, interface = self.can_interface)
    self.humidity 		= Humidity(hub = self, interface = self.can_interface)
    self.pressure 		= Pressure(hub = self, interface = self.can_interface)
    
    # Set MQTT devices.
    self.water_sensor 	= WaterSensor(hub = self, interface = self.mqtt_interface)

def start(self):
	# ...
    
    # Subscribe CAN devices.
    self.loop.create_task(self.can_interface.subscribe(self.door))
    self.loop.create_task(self.can_interface.subscribe(self.temperature))
    self.loop.create_task(self.can_interface.subscribe(self.dust))
    self.loop.create_task(self.can_interface.subscribe(self.humidity))
    self.loop.create_task(self.can_interface.subscribe(self.pressure))
    self.loop.create_task(self.can_interface.subscribe(self.egg_id))

    # Subscribe ZigbeeMQTT devices.
    self.loop.create_task(self.mqtt_interface.subscribe(self.water_sensor))

With this implemented, on receipt of a message from a given interface an attempt is made to route the packet to the corresponding subscribed device. If the message is successfully routed, it is deconstructed, processed and added to the database.

 

The last feature to mention is that some of the CAN devices have device specific actionable functionality implemented. For instance the automated door has functions for sending open and close door packets. These functions simply construct a packet of the correct form and transmit it across the network.

Here is the implementation of the open door function for the automated door device.

# Automated door open door function.    
    
async def open_door(self):
    """Send open door packet via CAN"""

    # Packet data
    source = 0x00
    destination = self.can_address
    payload = [0x02]

    # Build the open door command packet.
    msg = ci.PacketBuilder()
    msg.set_source(source)
    msg.set_destination(destination)
    msg.set_type(ci.PacketBuilder.COMMAND)
    msg.set_payload(payload)
    packet = await msg.construct()

    # send via the CAN interface and wait a given timeout for a response
    # to be received, return 'timed_out' on failure, result on success.
    await self.interface.send_packet(packet)
    done, pending = await asyncio.wait(
        [self.fetch_event_resp(), util.async_timeout(5.0, None)],
        return_when = asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()
    for task in done:
        result = "timed_out" if task.result() == None else task.result()
    return result

As can be seen, the packet is constructed and then transmitted over the given interface. The slightly confusing code at the end is simply a routine for setting up a timeout: once the message has been sent, the function waits for up to five seconds for a response. This wait is non-blocking.

 

Iteration 7 (Database Service)

We have already stated that the main control hub needs to act as an aggregator of data from the network. One aspect of this role is storing the received data within a database, so a service is going to need to be created that allows us to do this. From the previous iteration we concluded that each device will handle the specifics of adding its own records to the database, so we simply need to create an interface that they can utilise.

 

The development of this service was quite simple due to the fact that we only needed to create an interface for adding records and reading records. When instantiated this module will connect to the database and then provide the required functionality.

 

The database is connected to when the object is instantiated.

# Connection on instantiation.

import os
import psycopg2 as pg  # assumed driver, given the pg.connect(...) call below

def __init__(self):
    self.log = Logger()
    self.conn = pg.connect(
        host        = os.getenv('DB_HOST'),
        database    = os.getenv('DB_NAME'),
        user        = os.getenv('DB_USER'),
        password    = os.getenv('DB_PASS')
    )
    

 

The insert function simply takes a formatted query and the tuple of parameters to bind.

# Insert a record into the database.

async def insert(self, query, parameter_tuple):
    """Execute an insert query with the given parameters on the database"""

    try:
        cursor = self.conn.cursor()
        cursor.execute(query, parameter_tuple)
        self.conn.commit()
        cursor.close()
        self.log.info("Successfully added record to the database.")
        success = True
    except Exception as e:
        self.log.error("Unexpected database error whilst creating record: {}".format(e))
        success = False
    return success
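A hedged usage sketch of this insert interface follows; the table and column names are illustrative, not our actual schema.

# Hedged usage sketch of the insert service (schema names are illustrative).

from datetime import datetime

async def record_reading(db):
    """Example of a device handler adding a reading via the database service."""
    query = "INSERT INTO readings (device, value, taken_at) VALUES (%s, %s, %s);"
    return await db.insert(query, ("temperature", 21.4, datetime.now()))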

 

The read functionality takes a query and an optional tuple of parameters to bind and returns a list containing the rows of the result.

# Read from the database.

async def read(self, query, parameter_tuple = ()):
    """Perform a read operation on the database with the given query and parameters"""
    
    cursor = self.conn.cursor()
    try:
        cursor.execute(query, parameter_tuple)
        rows = cursor.fetchall()
        self.conn.commit()
    except Exception as e:
        self.log.error("Error occurred whilst reading from database: {}".format(e))
        rows = None
    finally:
        cursor.close()
    return rows
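
As an illustration of how a device type might use this interface, the hypothetical handler below records a temperature reading and reads the newest row back. The table comes from the System Database section, and psycopg2-style %s placeholders are assumed; the function and variable names are placeholders.

# Hypothetical usage of the database service from a device handler.

from datetime import datetime

async def store_temperature(db, value):
    """Record a temperature reading and fetch the most recent row back."""
    ok = await db.insert(
        "INSERT INTO temperature_readings (date, value) VALUES (%s, %s)",
        (datetime.now(), value)
    )
    if not ok:
        return None
    return await db.read(
        "SELECT date, value FROM temperature_readings ORDER BY date DESC LIMIT 1"
    )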

 

Iteration 8 (REST Server)

The next consideration we need to make is how the web application is going to communicate with the hub, and by proxy the greater network. To make the integration of the web application as simple as possible we opted to provide an authenticated REST API to the hub. This should allow the web application to simply query the hub using standard web requests in whatever language is used for its development.

 

For implementing this we considered a few libraries that provide support for developing REST APIs: Flask-RESTful and aioHTTP. Flask is one of the most popular web frameworks available for Python and is quite widely used; aioHTTP is less well known but supports asyncio out of the box. Both offered all the features required to build a REST server, so we initially opted for Flask as it had the larger community.

Our first implementation launched the Flask web server at the same time as the hub; however, due to the blocking nature of the server this halted the main hub's execution, because they were using the same instance of the Python interpreter. We quickly refactored this to have the main control hub spawn the web server as a separate process. This fixed the issue of the hub being halted, but did require some extra architecture to be added to the system. To allow the devices to communicate as separate processes, an interface for sockets needed to be added. This interface listens on a given port and handles TCP/IP requests; when an authenticated request is received from the web server, the hub handles it and returns a response.

Unfortunately we quickly found ourselves facing a new problem: when a request was being handled by the Flask server we were unable to make an asynchronous call to the hub, wait on a response from the network, and then respond to the waiting client; Flask simply returned None as it did not have support for co-routines. Below is a snippet of this web server.

# Flask REST API.

from flask import Flask
from flask_restful import Resource, Api

class DoorAPI(Resource):

    def get(self, command):
        if command == "open":
            hub_cmd = {'command': "door_open"}

        elif command == "close":
            hub_cmd = {'command': "door_close"}

        _ipc = IPclient(hub_cmd)
        return hub_cmd


app = Flask(__name__)
api = Api(app)
api.add_resource(DoorAPI, '/door/<string:command>')
app.run(host = "0.0.0.0", port = 8000)

As can be seen from the above code, the REST server was quite naive at this point. There was no authentication and only one endpoint existed, for testing. As we were unable to await a response from the main hub, the command was sent in a fire-and-forget fashion, _ipc = IPclient(hub_cmd); the command itself was just returned to the client.

We looked into multiple ways to get around this; however, we decided it would be quicker to create a demo using aioHTTP to see if this framework, with default asyncio support, would solve our issue. Luckily for us it did: each route handler in an aioHTTP web server is already a co-routine, so we were able to use await syntax immediately without issue.

 

With a working aioHTTP system in place, we started to add the required functionality: middleware to support basic authentication and error handling, and the set of required endpoints.

The authentication middleware handles unauthorised requests, returning a 401 response.

async def basic_auth_middleware(app, handler):
    """Middleware to perform password authentication to the REST service"""

    async def middleware(request):
        token = request.headers.get('authorization', None)
        api_auth = os.getenv('REST_AUTHKEY')

        if token != api_auth:
            return web.Response(text = "401: Unauthorized", status = 401)
        return await handler(request)

    return middleware

The error handling middleware deals with common issues, keeping the routes maintainable. A 404 is returned on an unknown endpoint, 522 on a hub connection issue, 503 on a communication issue, and 500 on all other generic server errors.

async def error_handler_middleware(app, handler):
    """Middleware to handle errors thrown by handlers."""

    async def middleware(request):
        try:
            return await handler(request)

        # Query parameter error.
        except KeyError:
            return web.Response(text = "404: Endpoint Not Found", status = 404)

        # Socket error.
        except OSError as e:
            if e.errno == errno.ECONNREFUSED:
                return web.Response(text = "522: Hub Connection Timed Out", status = 522)
            if e.errno == errno.ECOMM:
                return web.Response(text = "503: Service Unavailable", status = 503)
            return web.Response(text = "500: Internal Server Error", status = 500)

        # Uncaught server exception.
        except Exception as e:
            print(e)
            return web.Response(text = "500: Internal Server Error", status = 500)

    return middleware

 

Endpoint routes are split per device and aim to be as simple as possible. Here is an example of a couple of the available routes pertaining to the automated door.

# Automated door main hub REST endpoints.

async def door_close(request):
    """Handle a request to the /door/open endpoint"""
    command = {'device': 'door', 'action': 'close'}
    response = await hub.request(command)
    return web.json_response(text = json.dumps(response), status = 200)

async def door_update(request):
    """Handle a request to the /door/update endpoint"""
    command = {'device': 'door', 'action': 'update'}
    response = await hub.request(command)
    return web.json_response(text = json.dumps(response), status = 200)

As can be seen from the above code, with aioHTTP we are able to use the await syntax and get a response from the hub: response = await hub.request(command).
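
For completeness, here is a sketch of how these handlers and the middleware plug together. The port is carried over from the earlier Flask example, and the old-style middleware factories above (taking app and handler) are passed straight to the application, as aioHTTP supported at the time.

# Wiring the aioHTTP application (illustrative).

from aiohttp import web

app = web.Application(middlewares = [basic_auth_middleware,
                                     error_handler_middleware])
app.router.add_get('/door/close', door_close)
app.router.add_get('/door/update', door_update)
web.run_app(app, host = "0.0.0.0", port = 8000)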

 

Iteration 9 (Main Controller)

Having developed most of the services and interfaces required for the hub to operate, we needed to write the main controller that instantiates and connects all the components.

 

The logic for this component is quite simple: instantiate all the required devices, load the set of interfaces, subscribe the devices to their respective interfaces, and schedule the required automated behaviour. On instantiation the main controller creates an object for each of the aforementioned, and when the start function is run the application begins to listen on the interfaces. A function is present for scheduling a periodic heartbeat to all the CAN devices, as well as requesting a periodic status update; a minimal sketch of this wiring is given below.
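
The sketch below illustrates this wiring. It is a skeleton only: the class and method names are placeholders rather than the actual names from the source tree, and it assumes each device and interface exposes the co-routines shown.

# Illustrative main controller skeleton (names are placeholders).

import asyncio

class MainController:

    def __init__(self, devices, interfaces):
        self.devices = devices              # e.g. door, air quality, drinker
        self.interfaces = interfaces        # e.g. CAN, MQTT, socket interface
        for device in self.devices:
            device.subscribe(self.interfaces)   # placeholder subscription

    async def start(self):
        """Listen on every interface and run the periodic jobs side by side."""
        await asyncio.gather(
            *(interface.listen() for interface in self.interfaces),
            self.periodic_jobs()
        )

    async def periodic_jobs(self, period = 30.0):
        """Periodically heartbeat the CAN devices and request status updates."""
        while True:
            for device in self.devices:
                await device.heartbeat()
                await device.request_status()
            await asyncio.sleep(period)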

 

Iteration 10 (Language & Actions)

The final consideration for the main hub was the domain specific language, which should allow the user to customise the system to their needs. In order to keep the language as separate from the main hub as possible, all the actions have been implemented hub side; the language itself simply generates the list of actions that need to occur.

 

As the language warrants a fuller explanation, it has been given its own section within this corpus; this section only explains how the actions are implemented hub side. To visit the implementation of the language please click here.

Derived from the language section, the feature set of actions that needs to be implemented is:

 

An action for controlling the door has already been implemented within the device itself, so we only need to implement the notification based actions: sending SMS and email. For this, two new services have been created, SMSInterface and EmailInterface. These services provide a builder class to abstract the formatting required for creating an email or SMS message, and a send function to post it.

Below is the implementation of the send_email() function, a similar design is followed for send_sms().

# Send email function from EmailInterface class.

async def send_email(self, email_message):
    """Sends an Email using the environment credentials"""
    try:
        # Decompose the email into parts.
        to      = email_message['to']
        from_   = email_message['from_']
        body    = email_message['body']

        mail_server = smtplib.SMTP(self.smtp_addr, self.smtp_port)
        mail_server.ehlo()
        mail_server.starttls()
        mail_server.login(self.email_addr, self.email_pass)
        mail_server.sendmail(from_, to, body)
        mail_server.close()

    except KeyError as e:
        self.log.error("Malformed email message received: {0}".format(email_message))
    except Exception as e:
        self.log.error("Failed to transmit message to STMP server,error: {0}".format(e))
        

 

Known Issues

There is currently only one known issue with the main hub.

 

Future Work

There is a host of future work that could be done to improve the main hub.

 

System Database

Research

Our project needed a database to store the sensor data returned from the modular network. This data is then displayed in a graph format on the web interface.

We looked into a few different types of database. However, as we use standard SQL queries, any reasonably SQL-compliant relational database management system (RDBMS) will suffice. We opted to use PostgreSQL for our project as it was available to us at the University; it could easily be switched out for MySQL or the like.

Structure

We designed the structure of our database around the current deployment: as we knew we were going to have one sensor of each type, we created one table per sensor. Whilst this is not optimal, future work (described below) would rectify this.

 

The tables of our database are found below:

system
id SERIAL NOT NULL UNIQUE {PK}
lat DECIMAL NOT NULL
long DECIMAL NOT NULL
close_time INTEGER
open_time INTEGER
first_time_setup BOOLEAN NOT NULL
hub_url VARCHAR(128) NOT NULL

The system table is used to store information pertaining to the system as a whole. There is only ever one row, which is updated in place when values change.

 

user_account
username VARCHAR(128) NOT NULL UNIQUE {PK}
name VARCHAR(256) NOT NULL
password CHAR(64) NOT NULL
salt CHAR(64) NOT NULL
admin BOOLEAN NOT NULL
verified BOOLEAN NOT NULL
door_control BOOLEAN NOT NULL
default_pic BOOLEAN NOT NULL

The user_account table contains information pertaining to each individual user, such as the hashed password and whether the user has permission to control the door via the web interface.

 

user_email
username VARCHAR(128) NOT NULL {PK, FK}
email VARCHAR(256) NOT NULL {PK}

This table stores user emails, with the username being the foreign key referencing the username in the user_account table.

 

source_code
id SERIAL NOT NULL {PK}
name VARCHAR(128) NOT NULL
code VARCHAR NOT NULL
date_created TIMESTAMP
last_modified TIMESTAMP
is_active BOOLEAN NOT NULL

The above table stores data pertaining to the user programmed code snippets. There is only ever one row which has the is_active flag set to true.

 

temperature_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP
value DECIMAL

Basic table to store information sent from the temperature sensor.

 

dust_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP
value DECIMAL

Basic table to store information sent from the dust sensor.

 

pressure_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP
value DECIMAL

Basic table to store information sent from the pressure sensor.

 

water_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP
value BOOLEAN

Basic table to store information sent from the water sensor. Value contains true if there is water, and false if there is not.

 

humidity_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP NOT NULL
value DECIMAL NOT NULL

Basic table to store information sent from the humidity sensor.

 

door_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP NOT NULL
value VARCHAR(128) NOT NULL
initiator VARCHAR(128) NOT NULL

Basic table to store information sent from the door. Values can be either "open", "close" or "error".

 

egg_detection_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP NOT NULL
value BOOLEAN NOT NULL

Table to store information pertaining to egg presence. Value will be true if an egg is detected.

 

Future Work

Our current database design is not extensible: the user cannot, for example, add another temperature sensor without creating another table. The design will therefore have to be restructured to allow for this. For example, the new design could be similar to the following:

 

sensors
sensor_id SERIAL NOT NULL {PK}
sensor_name VARCHAR(128) NOT NULL
sensor_type VARCHAR(128) NOT NULL

 

Tables for the different types of sensor can then be created with a reference to the exact sensor_id in the sensors table:

temperature_readings
reading_id SERIAL NOT NULL {PK}
date TIMESTAMP
value DECIMAL
sensor_id {FK}

 

Queries such as

SELECT * FROM temperature_readings WHERE sensor_id = $1

can then be used to target a specific temperature sensor.
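
Through the hub's database service this could look like the following sketch; note that psycopg2 expects %s placeholders rather than the $1 style shown above, and the function name is a placeholder.

# Hypothetical parameterised read via the database service.

async def readings_for_sensor(db, sensor_id):
    return await db.read(
        "SELECT * FROM temperature_readings WHERE sensor_id = %s",
        (sensor_id,)
    )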

 

Networking

Premise

As we are building a modular network of sensors, it is a given that we will require some form of networking. At minimum, a system that aggregates data from multiple isolated sources requires a connection from those sources to the aggregator; this being said, we cannot overstate the importance of networking within our project.

Having opted for an architecture using a central control hub, we will need a connection from each device to it. Within the system there will be devices inside the coop, in close proximity to the main hub, as well as those outside it. The devices inside may be wired, whereas those outside cannot be.

Alongside the requirements for the sensor network we also need to consider how we are going to interface with this data such that we can display it to the end user.

 

Final Iteration Overview

The solution we implemented for our final product can be split into two components, module networking and general networking.

For the module networking, all the devices based inside the coop were connected via a Controller Area Network (CAN) bus. This bus allowed these devices to send their readings to the hub for aggregation and database entry. The devices based outside the coop, namely the drinker level monitor, were connected to a ZigBee based Personal Area Network (PAN). The coordinator of this PAN was a Raspberry Pi Zero with a connected CC2531 USB dongle.

Due to the commercial off-the-shelf (COTS) nature of the water sensor we used, the Xiaomi Aqara, we were required to use an external open source library to deconstruct its ZigBee packets. This library outputs the data via MQTT, and as such the hub subscribes to this broker. Ideally we would have avoided the need for this middleman; however, without sinking resources into either building a bespoke water sensor or reverse engineering the Zigbee2MQTT library, this did not seem feasible.

For the module networking we chose this combination of technologies for two overarching reasons: cost and reliability. For a full description of our decision please refer to the conclusion of the next section, Module Networking.

Being the means of traversing the system boundary, the main hub requires a connection to a local Internet-connected LAN; however, so does the Egg Identification module. Ideally only the main hub would have needed this, but due to processing power constraints the egg identification module had to be split into a client-server model. Had time permitted, we would have converted the egg identification client into a simple device that took a picture and transmitted it to the main hub over the CAN bus; the hub would then have called out to the external image identification service itself, returning the hub to being the only device requiring Internet connectivity.

As for general networking, the hub connects to an external database in which it records the information received from the devices within the network. This database is then accessed via the web application's REST service, which is used to display its contents.

 

Networking Requirements

As our system can be divided into two parts, the sensor network and the visualisation, we are going to need a means of passing data between the two. For this reason at least one of the devices within the sensor network needs external communication such that it can interact with the other components. In an attempt to keep costs down we have chosen a single device, the main control hub, as the only device that can cross the sensor network system boundary, that is, the only device that can access the Internet. This device is responsible for storing all the data in the external database and providing the set of other services required by the web application.

Below is a slightly abstracted diagram of our envisaged architecture. The sensor network will be a self-enclosed system that communicates with the outside world via the main control hub. This control hub will interface with the back end of the web server along with the external database.

 

Network Structure

 

As the general networking required to communicate between hosted databases, REST servers and web applications is well defined, it will be omitted from this document. Instead the scope will focus on the communication between the sensors within the sensor network, alongside a brief description of the external communication interface provided by the hub to the other services, namely the web application's REST server.

Module Networking

As we need to connect the devices of the coop in some manner, we will compare and contrast a variety of communication protocols for use within our project. We will split these protocols into two categories, wired and wireless.

 

The first step was to decide on the set of protocols to compare. As can be seen below we have selected some common choices and we will compare them for speed, reliability and cost. To conclude we will decide on the protocols that we are going to use within this project and state why we believe they are the best options.

 

For wired communication protocols we will be looking at: USB, I2C, SPI and CAN bus.

For wireless communication protocols we will be looking at: Wi-Fi (IEEE 802.11n), BLE and ZigBee.

 

USB Protocol

An asynchronous serial protocol, the USB standard was developed to set a common interface for peripherals. USB 2.0 provides a master-slave connection between two machines, supports a throughput of up to 480Mbps, and can connect a host to up to 127 slave devices (2^7 - 1). The master-slave premise dictates that a slave device can only transmit data back to the host when the host states it is ready to receive. The protocol for USB is rather complex and has a lot of overhead, but comes with robust error detection and a guaranteed quality of service. As for pricing, a caveat of USB is that should the product be sold, a licensing fee may have to be paid. Alongside this, if a custom sensor is built, the simplest solution for adding USB support would be an interface board like the CH340 (~£3-5); this board would in turn communicate with the rest of the system via UART or similar.

 

I2C Protocol

I2C is a serial multi-master, multi-slave protocol that allows communication over a two-wire bus. This bus uses resistive pull-up, that is, the line is held high, and this limits the achievable speeds; I2C can communicate at rates of up to 1Mbps. As for addressing, support is provided for 7-bit and 10-bit addresses, allowing just over 1000 devices to be uniquely identified. A read is performed by transmitting the slave's address with the read flag, followed by the internal address of the register being read; the slave then responds with the contents. A write sends the slave's address with the write flag, followed by the internal register address and finally the data. I2C interface boards can be purchased incredibly cheaply, at around ~£1.

 

SPI Protocol

Serial Peripheral Interface (SPI) is another master-slave protocol. It supports a single master and an unbounded number of slaves, though in practice this is capped by the limited number of pins available on the master. Within this setup the master is responsible for generating the clock signal along with initiating all data transfer. As SPI uses separate connections for slave input and master input, data can be sent in both directions simultaneously; it is full-duplex. No error detection or recovery is present within this protocol, and transfer rates of 10-20Mbps can be reached. Looking into the cost, most microcontrollers come with SPI capabilities, so outside of a few wires there is no extra cost.

 

CAN Protocol

Unlike the other protocols looked into so far, CAN bus does not follow the master-slave architecture; all devices are able to communicate freely on the bus without solicitation from another node. The CAN bus can transmit data at 125kbps at 500m, 1Mbps at 40m and 15Mbps at 10m. Unlike the other protocols, CAN has built-in mechanisms for bus recovery on errors as well as priority messaging. The first error recovery feature is that if a transmitter experiences a fault and the bus is stuck in a non-functional dominant state, the dominant timeout functionality will soon move it back to recessive. The second is that a CRC is transmitted with each frame; if a received frame does not match its CRC it is disregarded by all nodes and an error frame can be transmitted along the bus. As for priority messaging, each frame is transmitted with an arbitration ID, a 12-bit value that represents the priority of the message. If two nodes transmit simultaneously, the message with the higher-priority arbitration ID takes precedence and the other is made to wait. As for cost, each device is connected along the same two-wire bus via a CAN interface; these interfaces cost around £1.75 each. If the device being connected is only 3.3v tolerant on its pins, a logic level converter is required, which comes in at another ~£1.

 

Wi-Fi (IEEE 802.11n) Protocol

Wi-Fi is the standard wireless transmission protocol used in most homes today. It has a high throughput, reaching up to 450-600Mbps for 802.11n, and can operate in both the 2.4GHz and 5GHz bands. As this wireless standard was developed for computing devices within home and business environments it did not focus on low power; as such it may not be suitable for IoT devices that require their battery to last an extended period of time. The monetary cost of adding Wi-Fi capabilities to a sensor is in the region of ~£2 for a transceiver.

 

BLE Protocol

Bluetooth Low Energy (BLE) operates as a Personal Area Network (PAN) and is an energy efficient radio communication protocol widely found in mobile phones. Android, OS X, Windows Phone, iOS, Linux and Windows all support this protocol, allowing easy integration with these systems. Having a typical operating range of 2-5m, this technology has an upper transmit rate of 1Mbps. A device using this protocol can either take a peripheral (slave) role or the master role; the master can support and initiate multiple connections. Alongside this, a device can be either an observer or a broadcaster. A central device (master) will initiate a connection with a peripheral, transmit data, and both can then go back to sleep, conserving power. A BLE transceiver is around ~£4.

 

ZigBee Protocol

ZigBee is a wireless technology purpose built for low power Personal Area Networks (PAN). The main use of this technology is low data rate, energy efficient, secure networking. Commonly ZigBee nodes are wireless and sleep for the majority of the time; upon waking they quickly transmit data before going back to sleep. Devices implementing the ZigBee protocol often have a range of around 10 to 30 metres, require line of sight, and can transmit data at up to 0.25Mbps. Commonly devices implementing this protocol will be battery powered, although most modules can be wired in. The security in ZigBee usually refers to the fact that symmetric encryption is used between the communicating nodes. The cost of an XBee, a device implementing the ZigBee protocol, is around ~£20.

 

Conclusion

For the devices within the coop we opted for a Controller Area Network (CAN) bus configuration. For price as well as reliability we were certain that we wanted to use a wired protocol, and as such we considered USB, SPI, I2C and CAN. The process we used to decide on the protocol was simple elimination. The first protocol we crossed off was I2C, due to the rigid read/write structure it imposes; only being able to read directly from registers within the other device did not seem an ideal solution for our project. The next protocol we ruled out was USB, because this option seemed to introduce a lot of unnecessary overhead to our communications. The last one we ruled out was SPI, and this was a harder choice: on one hand we knew a lot more about this protocol and a lot more information was available online, but on the flip side it is a master-slave setup without error detection or recovery. In the end we decided to go with the CAN bus protocol and face the learning curve; the priority messaging alongside the error detection and recovery were too valuable to turn down, especially at the cheap cost of a CAN bus implementation.

As for the wireless communication for the devices outside the coop, we elected to use either ZigBee or BLE. We ruled out Wi-Fi pretty quickly due to the power consumption of the devices using it, especially as they would most likely be remote and battery powered. When comparing and contrasting ZigBee with BLE we came to the conclusion that both covered the needs of our system, and as such decided to use whichever was easier to implement when required.

 

ZigBee Networking

We decided that our remote water sensor, which monitors water level, would need to operate on a wireless network. We opted to use ZigBee for this over BLE as ZigBee offers longer range and lower power consumption, and we did not know where the water drinker would be placed. However, due to time constraints, we used a COTS solution, a Xiaomi Aqara water sensor.

We were pleasantly surprised that this module uses ZigBee to communicate with Xiaomi's proprietary bridge; however, time needed to be spent deconstructing the ZigBee packets to read the information contained inside. Our project supervisor advised us of a Node library called zigbee2mqtt which deconstructs the ZigBee packet and publishes it to an MQTT broker. More information about the water sensor can be found here.

Due to time constraints we opted to use this library; however, as future work we would build our own sensor, as the Xiaomi sensor has limitations. Firstly, it is only IP67 rated, meaning the sensor can only be submerged for 30 minutes at a depth of 1m. In a real-life situation the water sensor could be submerged for a day or two, so we would need to design a solution that accommodates this. For that solution, we would use ZigBee.

 

CAN Bus Networking

Implementing CAN bus

Implementing CAN bus within our system proved to be a bigger challenge than first anticipated. Using example code from GitHub we were able to get successful communication between two Arduinos, but adding a Raspberry Pi to the network proved painful.

We managed to find a guide online that showed how to connect an MCP2515 CAN interface to a Raspberry Pi, and to our surprise it involved cutting a track on the PCB and soldering a wire to the board. Reading into the problem, this is because the MCP2515 board's transceiver requires 5v power to transmit, while the receiving side only requires 3.3v. With the Arduinos we were able to plug the CAN interface in directly, as their GPIO pins are 5v tolerant; this is not the case on the Pi. For this reason we had to look into how we could step down the voltage before it reached the Pi. We experimented with building some simple voltage dividers, but to no avail, so instead we looked into logic level converters. These circuits allowed us to successfully step down the voltage such that it was safe to connect the Pi to the MCP2515, though this was not the end of our problems.

 

Logic level converter

Logic Level Converter Connected to MCP2515 Interface

 

Once we had successfully wired the two devices, the next problem to solve was how to set up the CAN interface within Linux. Reading around, we found that the Linux kernel contains an implementation of SocketCAN, so it should have just been a job of configuring this. Setting the required parameters and building a simple demo program in Python, we managed to get a few messages passing over the network before the line went silent. This would remain the situation for the next four days and around 30 hours of our lives.

Attempting to debug this we installed the can-utils tools and monitored the data being received on the bus. As before, we saw a short burst of success, then silence. We decided to circumvent Python in case one of the libraries was causing the issue, and simply sent raw CAN messages via the command line; this appeared to have a higher success rate. With this in mind we built a second demonstration program in Python, but instead of using the python-can library we opted for native sockets. Again, a few successful packets, then nothing. After hours of looking into different tools, statistics and page-10 Google results we were about to cut our losses and jump to our backup plan of ZigBee or SPI. In a last ditch effort at around 2am we decided to sit down with a drink and read the 94-page MCP2515 data sheet, where we came across the maximum SPI speed supported by the MCP2515: 10MHz. It turns out that the Raspberry Pi 3 Model B+ defaults above this, at 15.6MHz; this is not the case on the older models. Fixing this solved all the issues with CAN bus: it was actually the SPI controller silently failing, not the bus at all. The witnessed behaviour was all coincidence.
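
For reference, the native-socket demo program amounted to little more than the following sketch; the interface name, arbitration ID and payload here are illustrative.

# Minimal SocketCAN send/receive using native sockets (Python 3.3+).

import socket
import struct

CAN_FRAME_FMT = "=IB3x8s"   # can_id, dlc, 3 pad bytes, 8 data bytes

sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
sock.bind(("can0",))

# Transmit one frame: e.g. host byte 0x00 plus the status-request type byte.
payload = bytes([0x00, 0x20, 0, 0, 0, 0, 0, 0])
sock.send(struct.pack(CAN_FRAME_FMT, 0x01, 8, payload))

# Block until a frame arrives and unpack it.
can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, sock.recv(16))
print(hex(can_id), data[:dlc].hex())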

 

Packet Structure

As we decided to use CAN bus as our main communication protocol for the devices in the coop, we built a packet structure around the single frame size. We could have performed buffering at the receiver, but for the data we were required to send this seemed unnecessary. The payload of a standard CAN bus frame is 8 bytes in size, and thus this was the space we had to work with.

We split the available space into three top-level sections: a host byte defining sender and receiver, a type byte stating the message type, and a six-byte application-specific payload.

 

Outer frame

 

 

Host Byte

The host byte of the packet is split into two components, 4 bits for the source address and 4 bits for the receiver address; the structure of this can be seen below. This allows us to have up to 16 addressable devices, which is plenty for our purposes.

 

Host byte
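
As a quick illustration, the host byte can be packed and unpacked with plain bit operations. The sketch below assumes the source occupies the high nibble; the document does not fix this ordering.

# Packing and unpacking the host byte (nibble order assumed).

def pack_host_byte(source, destination):
    assert 0 <= source < 16 and 0 <= destination < 16
    return (source << 4) | destination

def unpack_host_byte(host_byte):
    return (host_byte >> 4) & 0x0F, host_byte & 0x0F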

 

 

Packet Type Byte

The next part of the packet we looked into was the types of message that we needed to be able to support. Using a bit flag approach we could cover 8 different packet types within our byte.

We decided on the following set of types:

Packet Type: Explanation
Data Response: A response containing device status data from client to hub.
Command: A command invoking an action from the hub to a client.
Update Request: A request from the hub to a client asking it to update its information.
Heartbeat Response: A response to a heartbeat request from client to hub.
Heartbeat Request: A request from the hub to a client asking it to acknowledge it is alive.
Status Request: A request from the hub to a client requesting device status.
Command Acknowledgement: An acknowledgement of a command from client to hub.
Reserved: -

 

This structure can be seen in diagrammatic form below.

Packet types

 

 

Status Request

The next part we looked into was the structure of the status requests against our system. As each device is only in one given state at a time we opted to zero out the payload of this packet. A status request sent from hub to device simply requires correct addressing in the host byte and a status packet type, 0b00100000.

 

Status request

 

 

Heartbeat Request

Next was the heartbeat request, another packet type sent from hub to device; this request has a type of 0b00010000 and a randomly generated payload. The sent payload should be returned in the subsequent heartbeat response.

 

Heartbeat request

 

 

Heartbeat Response

The answer to a heartbeat request, the heartbeat response shares a similar structure. When a device receives a heartbeat request it strips out the payload and returns it to the main control hub in a heartbeat response packet; the type of this packet is 0b00001000.

 

Heartbeat response
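
A sketch of the hub side of this exchange, reusing the PacketBuilder interface from the door example; the HEARTBEAT_REQUEST constant and the payload attribute on the response are assumed names, not confirmed by the source.

# Hub-side heartbeat round trip (illustrative).

import os

async def heartbeat(self, device_address):
    """Send a heartbeat request and verify the echoed payload."""
    nonce = list(os.urandom(6))                        # random 6-byte payload

    msg = ci.PacketBuilder()
    msg.set_source(0x00)                               # hub address
    msg.set_destination(device_address)
    msg.set_type(ci.PacketBuilder.HEARTBEAT_REQUEST)   # assumed constant
    msg.set_payload(nonce)
    packet = await msg.construct()

    await self.interface.send_packet(packet)
    response = await self.fetch_event_resp()

    # A healthy device echoes the exact payload back.
    return response is not None and list(response.payload) == nonce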

 

 

Update Time (Hub to Automated Door)

A time update command from the hub to the door. As can be seen, the packet type is 0b00000100. The first byte of the payload specifies the type of the update using bit flags, and the subsequent bytes carry the data. This update is for time, therefore bytes 2, 3, 4 and 5 of the packet contain a UNIX timestamp.

 

Update to door

 

 

Command (Hub to Automated Door)

This packet specifies a command from the hub to the automated door; as can be seen, the packet type is command, 0b00000010. The payload uses bit flags within the first byte to specify whether the door should open (0b10) or close (0b00).

 

Command to door

 

 

Data Response (Temperature to Hub)

This packet shows the format of a data response from the temperature sensor to the hub. The reading is split across four bytes, with the first two forming a 16-bit integral value and the last two a 16-bit fractional value in hundredths. For example, the temperature 20.25°C would be expressed as the following 4 bytes: 0x00140019.

 

Temperature data
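
A sketch of the encoding, matching the worked example above; treating the fractional half as hundredths and the byte order as big-endian are inferences from that example.

# Encode a reading into the four payload bytes (illustrative).

def encode_reading(value):
    integral = int(value)
    fraction = round((value - integral) * 100)
    return [integral >> 8, integral & 0xFF, fraction >> 8, fraction & 0xFF]

assert encode_reading(20.25) == [0x00, 0x14, 0x00, 0x19]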

 

 

Data Response (Humidity to Hub)

This packet shows the format of a data response from the humidity sensor to the hub. The reading is split across four bytes, with the first two forming a 16-bit integral value and the last two a 16-bit fractional value.

 

Humidity data

 

 

Data Response (Pressure to Hub)

This packet shows the format of a data response from the pressure sensor to the hub. This reading is split across two bytes that are combined to make a singular 16-bit integral value.

 

Pressure data

 

 

Data Response (Dust to Hub)

This packet shows the format of a data response from the dust sensor to the hub. This reading is split across two bytes that are combined to make a singular 16-bit integral value.

 

Dust data

 

 

Data Response (Door to Hub)

This packet shows the format of a data response from the automated door to the hub. The reading uses bit flags within the first byte of the payload to express the door's current state, as well as the action that caused it to enter that state. The actions include automated behaviour and a physical button press.

 

Door data

 

 

Data Response (Egg Identifier to Hub)

The egg identification unit uses bit flags within the first byte, though only one state exists. If the first byte is set to 0b00000001 an egg has been detected within the coop; otherwise nothing has been.

 

Egg identifier data

 

 

Command Acknowledgement (Door to Hub)

This last example packet shows an acknowledgement from the door to hub. This packet has the type 0b01000000 and states the action the door is taking alongside the event that caused it to occur.

 

Door event ack

 

 

Hub Interface

As the main control hub of the system acts as the bridge between the sensor network and the external web application, it needs to provide an interface that allows communication. So as not to add a lot of implementation-specific logic to the web application, we decided that the hub will host its own local REST server. The back end of the web application can simply ping these authenticated endpoints using its native web requests, allowing it to communicate with the sensor network. A diagram of this architecture can be seen below.

 

Hub REST architecture

 

To learn more about the implementation of the hub's REST API click here to visit the main control hub section of this corpus; alternatively, to learn more about the web application REST API click here.

 

 

Known Issues

There are a few known issues with our current networking that, should we have the chance to look into them, we would like to fix.

 

Future Work

As for future work we have a few suggestions as to how to improve the system.

 

 

Domain Specific Language

Premise

Having spoken to multiple smallholders, it was apparent that though they shared some common problems and faced similar hardships, they all had their own fixed routine for overseeing their coop. So that our system does not dictate the end user's process of managing their chickens, Dan Knox suggested that we look into building a simplistic language that can be used to programmatically control the system; he mentioned looking into ladder logic.

 

Final Iteration Overview

The final implemented solution was a departure from ladder logic and instead opted to use a custom domain specific language (DSL) to provide the user greater control over how the system operates. A DSL was chosen as it allowed the language to express more complex functionality outside of the standard 'if this then that' capabilities offered by ladder logic.

The event driven language produced includes three top level constructs, 'ON', 'AT' and 'EVERY'; these top level events dictate when the code inside is executed. The body of an 'ON' code block is executed when a given event occurs, an 'AT' code block when a specific time of day is reached, and an 'EVERY' code block at a fixed period.

The body of these top level constructs can contain two types of children, conditionals and actions. Conditionals are standard style 'if' statements, for example, if the temperature in the coop is greater than this threshold then do this. Actions on the other hand are outputs from the system, these include opening and closing the coop door or sending custom SMS or email notifications.
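
For a flavour of the language, the small illustrative program below uses each of the three constructs; the phone number and dust threshold are placeholders. The syntax follows the grammar given later in this section.

# Open the door at 07:00 and close it at 18:30.
AT 07:00 DO open_door END_AT
AT 18:30 DO close_door END_AT

# Text the owner whenever an egg is detected.
ON egg_detected DO
    send_sms("07715967677", "Egg detected in the coop.")
END_ON

# Every half hour, warn if the dust reading is too high.
EVERY 00:30 DO
    IF coop_dust > 50.0 THEN
        send_sms("07715967677", "Dust levels are high.")
    END_IF
END_EVERY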

 

Iteration 1 (Research)

Ladder Logic

Ladder logic is a programming language built on the notion of relays and switches. The inputs of the system are usually derived from the state of the devices and, dependent on their configuration, N outputs are generated. At present ladder logic has evolved into a language commonly used to diagrammatically describe a program based on device state, and is often used as a basis for writing the code for a programmable logic controller (PLC). A simplistic example of ladder logic can be seen below.

 

Simple ladder logic example

http://www.edgefx.in/wp-content/uploads/2014/08/24.jpg

 

The example above shows the premise of ladder logic: the left rail represents live and the right rail negative. Each rung represents a route to an output; if switch one or two is activated, the lamp is on.

Within our system, using the notion of ladder logic, we could build a program that performs some output dependent on the state of given sensors and devices. An example phrase in this ladder logic based language could be: 'If the door is open and the time is after 18:00 then close the door'. In this example there would be a single rung with two switches, one representing door state and the other the current time; the output of this rung would be closing the coop door.

 

Strengths of ladder logic approach

 

Limitations of ladder logic approach

 

Domain Specific Language (DSL)

Another approach to solving this problem would be to produce a domain specific language (DSL); this solution tends closer towards the standard general purpose programming languages we see today. It would allow constructs to be added to the language beyond the simple states present in ladder logic, making it easier to implement more complex features, like loops, parameters or variables.

This solution warrants a lot more development overhead, but would allow the language to be more expressive, make large programs simpler to read, and scale better.

 

The development of this kind of solution would include:

  1. Deciding on the constructs and features within the language
  2. Designing a grammar that implements these features and can produce all valid programs
  3. Creating the set of valid tokens present within the system
  4. Developing a lexical analyzer to parse the input program into tokens (tokenizer)
  5. Developing an interpreter to check semantic correctness and build the internal representation
  6. Developing a system for executing the internal representation of the program

 

Chosen method

Though implementing the ladder logic approach would be much simpler than the DSL, the extra capabilities afforded by the DSL outweigh this benefit. Having the ability to parse arbitrary self-defined syntax provides much greater expressiveness, and should the situation arise that we need to expand the language, choosing a DSL means we will not be put in the position where the language is unable to support a new construct. For these reasons we have chosen to implement a DSL.

 

Iteration 2 (Language Design)

Features

Now that the architecture behind the language has been decided, the next step is to define the functionality that the language will implement. Once this has been decided it should help to derive the best syntax and semantics to express it.

 

Client requested features

 

Additional desired features

 

Language Syntax

Overview

This part of the document will describe the process for deriving the syntax of the language. The goal of this section is to decide on the valid constructs in the language, and define the tokens and ordering required to build these.

 

Constructs

For the initial iteration the language will be quite lightweight, as per the list of previously defined features three top level constructs should be present: on a given event occurring, at a specific time of day and every fixed period.

 

Aiming to replicate standard English, to improve readability, these top-level constructs will read as follows:

On event: ON 'event' DO 'body'

At time: AT 'time' DO 'body'

Each period: EVERY 'time' DO 'body'

 

Now that we have defined top level constructs we can see that we need to be able to express events, time, and body constructs. The body of a piece of code should be able to perform two operations, execute conditionals and execute actions.

 

Events: Events will be atomic values within the language, similar to atoms in Erlang.

Time: For the ease of parsing time will be represented as a 24-hour clock (00:00).

Body: The body of a construct should be built of a set of actions and conditionals.

Conditional: IF 'condition' THEN 'body'

Actions: Actions will be defined commands within the language.

Condition: 'reading' 'operator' 'value'

Reading: A reading will be the status of a specific device within the modular network.

Operator: Standard set of comparison operators.

Value: A numerical or string value.

 

Defining Events

Events within the language will be derived from those raised by the devices within the modular network, the list is as follows:

 

Defining Readings

Readings within the language will be derived from the status of given devices within the modular network, the list is as follows:

 

Defining Actions

Actions within the language will be derived from the actionable features of the devices within the modular network alongside those defined by the client, sending SMS and sending email. The list is as follows:

 

Language Grammar

Now that the set of requirements and the generalised structure of the language have been defined we can produce a grammar to ensure that the language is able to parse the set of programs intended.

The following context-free grammar (CFG) is the result of that process. The start state is the non-terminal <program>.

It should be noted that due to difficulties in parsing the constructs within the language, that is ON, AT, EVERY and IF, they have all had an ending tag added to their definition; this can be seen in the grammar below.

 

Production Rules Output
<string> \"[^"]*\"
<number> [0-9]+\.?([0-9]{1,2})?
<time> [0-9]{2} ':' [0-9]{2}
<phone_number> \"\+?[0-9]{11,13}\"
<email> \"[A-Za-z0-9!#$%&'*+-/=?^_]\"
<operator> '<' |
'>' |
'<=' |
'>=' |
'='
<action> 'open_door' |
'close_door' |
<send_sms> |
<send_email>
<send_sms> 'send_sms(' <phone_number> ',' <string> ')'
<send_email> 'send_email(' <email> ',' <string> ',' <string> ')'
<actions> <action> |
<action> <actions>
<event> 'door_opening' |
'door_closing' |
'door_error' |
'egg_detected' |
'drinker_empty'
<reading> 'door_status' |
'coop_temperature' |
'coop_humidity' |
'coop_pressure' |
'coop_dust' |
'drinker_level'
<condition> <reading> <operator> <string> |
<reading> <operator> <number>
<conditions> <condition> |
<condition> 'and' <conditions> |
<condition> 'or' <conditions>
<conditional> 'if' <conditions> 'then' <body> 'end_if'
<body> <actions> <body> |
<conditional> <body> |
ε
<command> 'on' <event> 'do' <body> 'end_on' |
'at' <time> 'do' <body> 'end_at' |
'every' <time> 'do' <body> 'end_every'
<commands> <command> |
<command> <commands>
<program> <commands> |
ε

N.B: Terminal characters are denoted by single quotes and non-terminals by angled bracket notation.

 

Grammar Derivations

Now the language has been defined a few example derivations will demonstrate what can be expressed.

 

Derivation #1

This first program describes a piece of code that executes on the event 'door_opening'. When executed this code sends an SMS notification to the mobile number '07715967677' that reads 'Opened'.

# Derivation of program that executes on door opening and sends SMS notification.

<program> 
	--> <commands>
	
	--> <command>
	
	--> 'ON' <event> 'DO' <body> 'END_ON'
	
	--> 'ON' 'door_opening' 'DO' <body> 'END_ON'
	
	--> 'ON' 'door_opening' 'DO' <actions> <body> 'END_ON'
	
	--> 'ON' 'door_opening' 'DO' <actions> 'END_ON'
	
	--> 'ON' 'door_opening' 'DO' <action> 'END_ON'
	
	--> 'ON' 'door_opening' 'DO' <send_sms> 'END_ON'
	
	--> 'ON' 'door_opening' 'DO' 'send_sms(' <phone_number> ',' <string> ')' 'END_ON'
	
	--> 'ON' 'door_opening' 'DO' 'send_sms(' '"07715967677"' ',' <string> ')' 'END_ON'
	
	--> 'ON' 'door_opening' 'DO'
			'send_sms(' '"07715967677"' ',' '"Opened."' ')'
        'END_ON'
	

 

Derivation #1 Tree

Simple CFG derivation tree

 

Derivation #2

This second example demonstrates a program that executes every thirty minutes. The code checks the current status of the 'coop_temperature' and sends a warning email to 'ds576@kent.ac.uk' reading 'Hot!' if the temperature at that time exceeds '35.0' degrees C.

# Derivation of program that executes every half hour and sends a warning email if coop
# temperature gets too high.

<program>
	--> <commands>
	
	--> <command>
	
	--> 'EVERY' <time> 'DO' <body> 'END_EVERY'
	
	--> 'EVERY' '00:30' 'DO' <body> 'END_EVERY'
	
	--> 'EVERY' '00:30' 'DO' <conditional> <body> 'END_EVERY'
	
	--> 'EVERY' '00:30' 'DO' <conditional> 'END_EVERY'
	
	--> 'EVERY' '00:30' 'DO' 'IF' <conditions> 'THEN' <body> 'END_IF' 'END_EVERY'
	
	--> 'EVERY' '00:30' 'DO' 'IF' <condition> 'THEN' <body> 'END_IF' 'END_EVERY'
	
	--> 'EVERY' '00:30' 'DO' 'IF' <reading> <operator> <number> 'THEN' <body> 'END_IF' 'END_EVERY'
	
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' <number> 'THEN'
				<body> 
			'END_IF'
		'END_EVERY'
		
    --> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				<body> 
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				<actions> <body>
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				<actions> 
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				<action> 
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				<send_email> 
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				'send_email(' <email> ',' <string> ',' <string> ')'
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				'send_email(' '"ds576@kent.ac.uk"' ',' <string> ',' <string> ')'
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				'send_email(' '"ds576@kent.ac.uk"' ',' '"Subject."' ',' <string> ')'
			'END_IF'
		'END_EVERY'
		
	--> 'EVERY' '00:30' 'DO'
			'IF' 'coop_temperature' '>' 35.0 'THEN'
				'send_email(' '"ds576@kent.ac.uk"' ',' '"Subject."' ',' '"Hot!"' ')'
			'END_IF'
		'END_EVERY'
	

 

Derivation #2 Tree

Complex CFG derivation tree

 

Iteration 3 (Implementation)

How to Implement

Having produced the grammar of the language to be implemented, the next step was to look into how this could be implemented in Python, and how it will integrate with the larger system.

 

As mentioned prior implementing a language can be broken down into a few required steps.

  1. Defining a valid set of tokens from the grammar
  2. Building a lexical analyzer to parse the source code into these tokens
  3. Defining an internal representation of the language in Python
  4. Building an interpreter to convert this token stream into this internal representation
  5. Building an environment or architecture to run the internal representation

 

Defining Tokens

Within a language a token is considered to be a single atomic element; tokens are often designed as a key-value pair where the value may be null. Every valid input within a program must be able to be broken down into a set of constituent tokens, and this process must be deterministic.

 

The common school of thought behind the design of standard tokens within a general purpose programming language is that there are five required token types, a sixth if you include comments: identifiers, keywords, separators, operators and literals.

 

 

Applying these types of tokens to our grammar we produce the following set:

 

Identifier Tokens

Due to the simplicity of this language no user defined identifiers are necessary.

 

Keyword Tokens

Construct tokens

Name Source lexeme Value
AND AND Ø
OR OR Ø
IF IF Ø
THEN THEN Ø
DO DO Ø
END_IF END_IF Ø
ON ON Ø
END_ON END_ON Ø
AT AT Ø
END_AT END_AT Ø
EVERY EVERY Ø
END_EVERY END_EVERY Ø
EOF -1 Ø
LBRACKET ( Ø
RBRACKET ) Ø
QUOTATION " Ø
COMMA , Ø

N.B. The character 'Ø' denotes that a value is not required.

 

Readings Tokens

Name Source lexeme Value
READING door_status
coop_temperature
coop_humidity
coop_pressure
coop_dust
drinker_level
Source

 

Event Tokens

Name Source lexeme Value
EVENT door_opening
door_closing
door_error
egg_detected
drinker_empty
Source

 

Action Tokens

Name Source lexeme Value
ACTION send_email
send_sms
close_door
open_door
Source

 

Separator

Unique separators for language constructs have been implemented as keywords within the grammar; this makes reading nested constructs easier but, in turn, makes the language syntax stricter and increases how much the end user is required to remember.

 

Operator

Name Source lexeme Value
OPERATOR <
>
<=
>=
=
Source

 

Literal

Name Source lexeme Value
EMAIL_ADDRESS String matching the regular expression "[A-Za-z0-9!#$%&'*+-/=?^_]" Source
PHONE_NUMBER String matching the regular expression "\+?[0-9]{11,13}" Source
TIME String matching the regular expression [0-9]{2} ':' [0-9]{2} Source
NUMBER String matching the regular expression [0-9]+\.?([0-9]{1,2})? Source
VALUE door_open
door_closed
door_error
door_in_transition
Source
STRING String matching the regular expression \"[^"]*\" Source

 

Comment

Comments have been implemented as the # character and function in the same manner as in Shell or Python. A comment character will cause the subsequent characters within the line of text to be ignored.

 

Token Representation & Parsing

Now that a set of defined tokens has been decided, the next step is to create the token representation within Python and write a lexical analyzer (tokenizer) for it. This lexical analyzer will convert a stream of source code into the corresponding set of tokens.

 

As stated before the structure of a token is often simply a key value pair; below is the implementation used within this project.

class Token:

    def __init__(self, type, value, line, pos):
        # Token fields.
        self.type = type
        self.value = value
        self.line = line
        self.position = pos

N.B. Line number and position within the line have been added to aid with debugging erroneous source code.

 

The Python class Lexer reads in the source code and converts it into an output stream of tokens. The main function within this class is the get_next_token(self) method which, on each subsequent call, returns the next token that can be generated from the stream.

This works by reading characters from the input source and comparing them against the rules for the given lexemes until a match is found. When a match is found, a token object is created and returned.

 

The example below shows a token match for a comment and left bracket.

# Handle comment line.
if self.current_char == '#':
    while self.current_char != '\n':
        self.advance()
    self.skip_newline()
    continue

# Handle open bracket token.
if self.current_char == '(':
    token = Token(LBRACKET, '', self.line, self.line_pos)
    self.advance()
    return token

The case for handling comments ignores the subsequent characters in the source code until a newline character is reached. Upon hitting a newline this is skipped and then the lexer continues as usual. As for the left bracket, if this character is recognized within the stream, a token representing this is created and returned. The lexer method self.advance() simply moves the pointer for the current character to the next within the source.

 

Tokenization of Program

Using the program derived from the earlier example used within the grammar derivation, here is how the output token stream for that program would look.

 

Program source code:

# Program that executes every half hour and sends a warning email if coop
# temperature gets too high.

EVERY 00:30 DO
	IF coop_temperature > 35.0 THEN
		send_email("ds576@kent.ac.uk", "Subject", "Hot!")
	END_IF
END_EVERY

 

Lexical analyzer output:

{type: EVERY, value: None, line: 2, pos: 10}
{type: TIME, value: '00:30', line: 2, pos: 16}
{type: DO, value: None, line: 2, pos: 19}
{type: IF, value: None, line: 3, pos: 8}
{type: READING, value: 'coop_temperature', line: 3, pos: 25}
{type: OPERATOR, value: '>', line: 3, pos: 27}
{type: NUMBER, value: 35.0, line: 3, pos: 32}
{type: THEN, value: None, line: 3, pos: 37}
{type: ACTION, value: 'send_email', line: 4, pos: 17}
{type: LBRACKET, value: '(', line: 4, pos: 17}
{type: QUOTATION, value: '"', line: 4, pos: 18}
{type: EMAIL_ADDRESS, value: 'ds576@kent.ac.uk', line: 4, pos: 35}
{type: QUOTATION, value: '"', line: 4, pos: 35}
{type: COMMA, value: ',', line: 4, pos: 36}
{type: QUOTATION, value: '"', line: 4, pos: 38}
{type: STRING, value: 'Subject', line: 4, pos: 46}
{type: QUOTATION, value: '"', line: 4, pos: 46}
{type: COMMA, value: ',', line: 4, pos: 47}
{type: QUOTATION, value: '"', line: 4, pos: 49}
{type: STRING, value: 'Hot!', line: 4, pos: 54}
{type: QUOTATION, value: '"', line: 4, pos: 54}
{type: RBRACKET, value: ')', line: 4, pos: 55}
{type: END_IF, value: None, line: 5, pos: 12}
{type: END_EVERY, value: None, line: 6, pos: 14}
{type: EOF, value: None, line: 7, pos: 1}

N.B. The above output is the defined standard string representation of token objects.

 

Internal Representation

Now that we have formed a token stream from the elements that build the grammar, we need to look into how we can implement the required functionality in Python. Within our system this is the purpose of the internal representation: a way to express our program as a structure of nested objects, where these objects represent the different constructs and data types within our language.

This solution was proposed by Scott Owens during a brief chat about how we could implement a domain specific language within Python. He mentioned that if you are looking to generate just an output set, it would be simple to implement each component of the language as an object. In this arrangement, if all constructs have an execute function, they can resolve to a set of potential actions; this set can subsequently be returned to the caller and executed.

 

Internal representation dir structure

Internal representation modules from source code

Within our language the design of these objects is quite simple. Each type has fixed behavior, and outside of conditional statements few have variable behavior.

 

Actions, Readings & Events

All actions within the system are expressed as a subclass of the abstract class ActionIR. Each child is a data structure for its respective type and required parameters.

class ActionIR:
    """
    Abstract class for intermediate representation of system actions.
    @Author: Damon M. Sweeney
    @Version: 15-02-2019
    """
    def __init__(self, action):
        self.log = Logger()
        self.action_type = action

class OpenDoor(ActionIR):
    def __init__(self):
        super().__init__(Action.COOP_DOOR_OPEN)
        
class SendSMS(ActionIR):
    def __init__(self, number, message):
        super().__init__(Action.SEND_SMS)
        self.number = number
        self.message = message

As can be seen from the snippet above, both of the concrete classes, OpenDoor and SendSMS, have a given Action Enum type, and SendSMS has additional parameters; these will help when we later execute the action.

 

Readings and Events follow a similar setup to Actions. Each is a subclass of its abstract parent, and both are merely data structures wrapping their given Enum type.

class EventIR:
    """
    Abstract class for intermediate representation of system events.
    @Author: Damon M. Sweeney
    @Version: 15-02-2019
    """
    def __init__(self, event):
        self.log = Logger()
        self.event_type = event

class DoorOpening(EventIR):
    """Coop door opening event."""
    def __init__(self):
        super().__init__(Event.DOOR_OPENING)

class ReadingIR:
    """
    Abstract class for intermediate representation of system readings.
    @Author: Damon M. Sweeney
    @Version: 15-02-2019
    """
    def __init__(self, reading):
        self.log = Logger()
        self.reading_type = reading

class CoopTemperature(ReadingIR):
    """Coop temperature reading."""

    def __init__(self):
        super().__init__(Reading.COOP_TEMPERATURE_STATUS)

 

Code block (Abstract class)

Codeblock is the name given to the abstract parent class of all the constructs within the language. It has one abstract method, exec(self, environment), which should implement the specific behavior of the construct and return a result. It takes one parameter: an environment housing a dictionary of the current readings of the devices within the modular network.

import abc

class Codeblock(abc.ABC):
    """
    Abstract base class required for codeblock objects.
    @Author: Damon M. Sweeney
    @Version: 14-02-2019
    This class outlines the 'must have' functionality of any codeblock
    within the ChickenLang language.
    """

    @abc.abstractmethod
    def __init__(self):
        raise NotImplementedError

    @abc.abstractmethod
    def exec(self, environment):
        """Implements the functionality of the codeblock.
        Requires an 'environment' variable containing the current system state."""
        raise NotImplementedError

 

At, On & Every

At, On and Every are the top level constructs that make up a program within the language. At constructs execute at a specific time of day, Every at a fixed period, and On at the occurrence of a given event. As before, a simple data structure is used to represent each of these.

class At(Codeblock):
    """
    'At' top level event codeblock implementation.
    @Author: Damon M. Sweeney
    @Version: 14-02-2019
    This module implements the functionality of an 'At' event.
    These types of event occur at a specific time of the day.
    """

    def __init__(self, time, body):
        self.log = Logger()
        self.time = time
        self.body = body

    def exec(self, environment):
        """Executes the body of the 'at' codeblock.
        Triggering of this event is handled by the language controller."""

        self.log.debug("Executing 'At' codeblock: {0}.".format(self.body))
        actions = self.body.exec(environment)
        return actions

class On(Codeblock):
    """
    'On' top level event codeblock implementation.
    @Author: Damon M. Sweeney
    @Version: 14-02-2019
    This module implements the functionality of an 'On' event.
    This codeblock occurs when a specific event happens within the system.
    """

    def __init__(self, event, body):
        self.log = Logger()
        self.event = event
        self.body = body

    def exec(self, environment):
        """Executes the body of the 'on' codeblock.
        Triggering of this event is caused by a device event in the system."""

        self.log.debug("Executing 'On' codeblock: {0}.".format(self.body))
        actions = self.body.exec(environment)
        return actions

class Every(Codeblock):
    """
    'Every' top level event codeblock implementation.
    @Author: Damon M. Sweeney
    @Version: 14-02-2019
    This module implements the functionality of an 'Every' event.
    These types of event occur at a constant defined interval of time.
    """

    def __init__(self, time, body):
        self.log = Logger()
        self.time = time
        self.body = body

    def exec(self, environment):
        """Executes the body of the 'every' codeblock.
        Triggering of this event is handled by the language controller."""

        self.log.debug("Executing 'Every' codeblock: {0}.".format(self.body))
        actions = self.body.exec(environment)
        return actions

 

Body

The Body class implements the inner contents of an On, At, Every or If construct. A Body holds a list of children, and these children may be of type Action or Conditional; this allows arbitrarily nested conditions. The exec(self, environment) method of the Body is where the functionality lies.

When executed, the list of stored constructs is executed and a list of outputs is generated and returned.

def exec(self, environment):
    """Iterates over the list of encompassed code executing each step."""
    self.log.debug("Executing 'body' codeblock: {0}.".format(self.steps))

    # Iterate over body generating list of output actions.
    actions = []
    for codeblock in self.steps:
        # If an atomic action add to output.
        if issubclass(type(codeblock), ActionIR):
            actions.append(codeblock)

        # If not atomic, execute the nested codeblock and collect its output.
        else:
            result = codeblock.exec(environment)
            if result is not None:
                actions.extend(result)

    # Return the output set of actions from the 'Body'.
    return actions

 

Conditional

A Conditional statement has a conditions object which must be satisfied if the body is to be executed. A conditions object is constructed of multiple conditions separated by AND or OR. Each condition is a comparison of a Reading against a value defined by the user; the Reading is replaced with the corresponding value from the environment.

In the current implementation multiple conditions are applied from left to right. For example, the statement (True & False | False) is equivalent to ((True & False) | False).

class Conditional(Codeblock):
    """
    Language 'Conditional' implementation.
    @Author: Damon M. Sweeney
    @Version: 14-02-2019
    This class implements the functionality of a  conditional statement.
    """

    def __init__(self, conditions, body):
        self.log = Logger()
        self.conditions = conditions
        self.body = body

    def exec(self, environment):
        """Executes the conditional codeblock.
        If the conditions evaluate to true the body is executed."""
        if self.conditions.exec(environment):
            return self.body.exec(environment)

class Conditions(Codeblock):

    # ...

    def exec(self, environment):
        """Executes the set of conditions, resolving to a boolean True or False.
        Condition precedence is left to right."""

        # Walk the list with an explicit iterator so the operand following
        # an AND/OR can be consumed without mutating the list mid-iteration.
        result = None
        conditions = iter(self.conditions)
        for cond in conditions:
            if isinstance(cond, Condition):
                result = cond.exec(environment)
            elif cond == AND:
                result = result and next(conditions).exec(environment)
            elif cond == OR:
                result = result or next(conditions).exec(environment)

        return result

class Condition(Codeblock):

    # ...
    
    def __init__(self, reading, operator, value):
        self.log = Logger()
        self.reading = reading
        self.operator = operator
        self.value = value


    def exec(self, environment):
        """Executes the condition, resolving to a True or False."""

        # Obtain the reading from the execution environment and
        # derive result of conditional operation and return the value.
        reading_value = environment[self.reading.reading_type]
        try:
            if self.operator == '<':
                res = reading_value < self.value
            elif self.operator == '>':
                res = reading_value > self.value
            elif self.operator == '<=':
                res = reading_value <= self.value
            elif self.operator == '>=':
                res = reading_value >= self.value
            elif self.operator == '=':
                res = reading_value == self.value
            else:
                res = False
        except Exception:
            # Comparison failed (e.g. missing reading or mismatched types).
            res = False

        return res

 

Interpreter

Now that we have produced an internal representation of how to structure the user's program, we need to convert the token stream produced by the lexical analyzer into this form. This is done by the Interpreter module.

The responsibility of this module is to read the consecutive stream of tokens and convert them into corresponding objects from the internal representation.

For example, the following stream of tokens would be converted into a SendEmail action object.

# Put through the language interpreter returns object of type SendEmail.
# This object has three fields: email, subject, message.

{type: ACTION, value: 'send_email', line: 4, pos: 17}
{type: LBRACKET, value: '(', line: 4, pos: 17}
{type: QUOTATION, value: '"', line: 4, pos: 18}
{type: EMAIL_ADDRESS, value: 'ds576@kent.ac.uk', line: 4, pos: 35}
{type: QUOTATION, value: '"', line: 4, pos: 35}
{type: COMMA, value: ',', line: 4, pos: 36}
{type: QUOTATION, value: '"', line: 4, pos: 38}
{type: STRING, value: 'Subject', line: 4, pos: 46}
{type: QUOTATION, value: '"', line: 4, pos: 46}
{type: COMMA, value: ',', line: 4, pos: 47}
{type: QUOTATION, value: '"', line: 4, pos: 49}
{type: STRING, value: 'Hot!', line: 4, pos: 54}

 

Given a stream of tokens, calling load_program() on the Interpreter yields a list of At, On and Every top level construct objects. Each object contains its body, which is built recursively; for this reason the output is constructed from the bottom up.

The interpreter looks for a given set of tokens within the stream to build each construct; after a token has been matched it is consumed and the interpreter moves on to the next.
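As a sketch of this process, load_program() could be structured as follows. The construct_at/construct_on/construct_every helper names, and the AT and ON token type constants, are hypothetical; only EVERY and EOF appear in the output above.

def load_program(self):
    """Parse the token stream into a list of top level constructs."""
    constructs = []
    while self.current_token.type != EOF:
        if self.current_token.type == AT:
            self.consume(AT)
            constructs.append(self.construct_at())
        elif self.current_token.type == ON:
            self.consume(ON)
            constructs.append(self.construct_on())
        elif self.current_token.type == EVERY:
            self.consume(EVERY)
            constructs.append(self.construct_every())
        else:
            self.error()
    return constructs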

 

Below is the function for creating a SendSMS object from the stream, along with the implementation of the method consume(token_type).

# Create a SendSMS action object from stream.
def action_send_sms(self):
    # Check for parameter field opening
    self.consume(LBRACKET)

    # Obtain number parameter
    self.consume(QUOTATION)
    number = self.current_token
    self.consume(PHONE_NUMBER)
    self.consume(QUOTATION)
    self.consume(COMMA)

    # Obtain message parameter
    self.consume(QUOTATION)
    message = self.current_token
    self.consume(STRING)
    self.consume(QUOTATION)

    # Check for parameter field closing
    self.consume(RBRACKET)

    return SendSMS(number.getValue(), message.getValue())

# Compare current token with expected passed token type.
# If they match 'consume' the current token and assign the next token.
def consume(self, token_type):
    if self.current_token.type == token_type:
        self.current_token = self.lexer.get_next_token()
    else:
        self.error()

 

Once the entire token stream has been parsed into the internal representation and the list of top level constructs returned, the user's program is ready to be set up for execution.

 

Internal Representation Execution

Now that we have generated the internal representation of the user program and can call exec on a given top level construct to get an output list of actions, we need to look into how we are going to schedule execution.

As said before, On constructs execute on an event, At at a given time of the day, and Every at a fixed interval. We know that for On constructs the events within the modular network are going to be thrown from connected devices, so we need a means of receiving these; for the At and Every constructs we simply need to schedule the time at which they should execute.

 

Catching Events

As Events will be thrown from devices within the network, these must be passed to the module handling the language: the language controller.

Upon receiving an event the language controller looks at the list of event-triggered constructs and checks for a match in event type. If the Event of a construct matches, its body is executed and a list of output actions generated and returned.

 

The code for executing the corresponding constructs can be seen below.

# Handles a received event and executes the corresponding code.

async def handle_event(self, event):
    """Notify the language controller of a sensor event."""

    self.log.info("Event raised, executing related 'On' codeblocks.")

    # Check event is a member of 'Event' enum.
    if not isinstance(event, Event):
        self.log.error("Event raised not recognised: {}".format(event))
        return

    # Iterate over event triggered events executing those required.
    for codeblock in self.on_trigger:
        if codeblock.event.event_type == event:
            self.loop.create_task(self.execute_codeblock(codeblock))

 

Scheduling Timed Execution

Timed events for the At and Every constructs have similar execution triggers; both execute code dependent on time. Every constructs occur at a user defined period starting immediately, whereas At constructs happen from the next occurrence of the given time, and subsequently every twenty-four hours.

For this reason a timer class, TimeScheduler, was created. This class allows these events to be scheduled and a callback set for when the timer elapses.

# The time scheduler class allows tasks to be scheduled daily or at a fixed interval.

class TimeScheduler:
    """
    Time Scheduler.
    @Author: Damon M. Sweeney
    @Version: 14-02-2019
    This module allows a module to register a callback at a specific time.
    """

    def __init__(self):
        # ...

    async def clear(self):
        # ...

    async def register_daily(self, time, callback, code_ir):
        # ...

    async def register_every(self, period, callback, code_ir):
        # ...
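Since the method bodies are elided above, the following is a minimal sketch of how such a scheduler could be built on top of asyncio. The self.tasks bookkeeping and the _seconds_until helper are assumptions for illustration, not the project's actual implementation.

import asyncio
from datetime import datetime, timedelta

class TimeScheduler:
    """Sketch: schedule callbacks daily or at a fixed interval (assumed internals)."""

    def __init__(self):
        self.tasks = []

    async def clear(self):
        """Cancel all currently scheduled tasks."""
        for task in self.tasks:
            task.cancel()
        self.tasks = []

    def _seconds_until(self, time):
        """Seconds until the next occurrence of the given datetime.time."""
        now = datetime.now()
        target = datetime.combine(now.date(), time)
        if target <= now:
            target += timedelta(days=1)
        return (target - now).total_seconds()

    async def register_daily(self, time, callback, code_ir):
        """Run callback(code_ir) at the next occurrence of 'time', then daily."""
        async def runner():
            while True:
                await asyncio.sleep(self._seconds_until(time))
                await callback(code_ir)
        self.tasks.append(asyncio.ensure_future(runner()))

    async def register_every(self, period, callback, code_ir):
        """Run callback(code_ir) immediately and then every 'period' seconds."""
        async def runner():
            while True:
                await callback(code_ir)
                await asyncio.sleep(period)
        self.tasks.append(asyncio.ensure_future(runner()))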
        

 

Execution Function

The final piece in creating the language of the system was to build a method that converts an arbitrary internal representation Action into actual output.

To do this a simple execute method was created which, given a construct, generates its list of output actions and performs calls to the corresponding functions on the main controller of the hub.

The implementation of this function can be seen below.

# Execute a given construct and handle the set of returned actions.

async def execute_codeblock(self, codeblock):
    """Execute a given codeblock and perform the set of output actions."""

    environment = self.hub.reading_env
    for action in codeblock.exec(environment):

        if isinstance(action, OpenDoor):
            self.log.info("Executing programmed open coop door action.")
            self.loop.create_task(self.hub.action_open_door())

        elif isinstance(action, CloseDoor):
            self.log.info("Executing programmed close coop door action.")
            self.loop.create_task(self.hub.action_close_door())

        elif isinstance(action, SendSMS):
            self.log.info("Executing programmed send SMS action.")
            self.loop.create_task(
                self.hub.action_send_sms(action.number, action.message)
            )

        elif isinstance(action, SendEmail):
            self.log.info("Executing programmed send email action.")
            self.loop.create_task(
                self.hub.action_send_email(action.email, action.subject, action.msg)
            )

        else:
            self.log.error("Action not found: {}.".format(action))
            

 

Known Issues

Below are various issues that we have noticed with the current implementation of the domain specific language.

 

Precedence of Conditions

As it stands in the current implementation of conditionals, the user is unable to dictate the precedence of different conditions due to the lack of bracketing support. Though with the current set of readings that can be compared this does not cause much loss of expressiveness, it may become a bigger problem if the language were to expand.
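To illustrate with a constructed example (not from the project), consider the condition a > 1 OR b > 1 AND c > 1 with readings a = 5, b = 0, c = 0:

# Left-to-right, as currently implemented:
((5 > 1) or (0 > 1)) and (0 > 1)    # evaluates to False

# Conventional precedence, where AND binds tighter than OR:
(5 > 1) or ((0 > 1) and (0 > 1))    # evaluates to True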

 

Refactor Validation from Interpreter to Readings

At present the validation for given conditional statements occurs within the Interpreter module. It would be ideal to refactor this into a separate module, or into the Reading subclasses themselves, so as to improve the ease of adding new device types to the language and to better encapsulate functionality.
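For instance, a hypothetical sketch (not existing code) of validation moved onto a Reading subclass, with VALID_OPERATORS and validate() as assumed names:

class CoopTemperature(ReadingIR):
    """Coop temperature reading."""

    # Comparisons this reading supports (hypothetical).
    VALID_OPERATORS = ('<', '>', '<=', '>=', '=')

    def __init__(self):
        super().__init__(Reading.COOP_TEMPERATURE_STATUS)

    def validate(self, operator, value):
        """Return True if a conditional with this operator and value is
        valid for a temperature reading."""
        return operator in self.VALID_OPERATORS and isinstance(value, float)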

 

Future Work

As with all products there is room to expand and advance; below are a few thoughts we had on how we would progress the language further.

 

Update Implementation to Support Snippets

Though the source file of the code being executed may have as many components as the user sees necessary, a limitation of the current system is that only a single source file can be active at once. Ideally we would provide the user with the capability to hot swap individual segments of code in and out. This feature would allow the user to build a collection of individual snippets that each perform a distinct task, and then switch between the active ones within a few clicks on the website.

 

Simplify Separator Tokens

Another big improvement to the language would be to remove the explicit separator characters. As it stands each construct has an opening tag and a corresponding close of the form 'END_X'. Removing this would make the language much more readable and much closer to written English. We see two ways this could be done: the first is the pythonic method of using indentation to dictate the level of nesting; the second is the Java approach of using a single terminator character such as '}'. A sketch of the first approach is shown below.
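Under the indentation-based approach, the earlier example program might be written as follows. This is a sketch only; the syntax is not implemented.

EVERY 00:30 DO
    IF coop_temperature > 35.0 THEN
        send_email("ds576@kent.ac.uk", "Subject", "Hot!")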

 

Visualisation

Web Server

To provide a means for the user to access data from the modular network, we've created a web interface to display it. This interface shows not only the current data, but also historical records alongside logs from the hub itself. It also allows verified users to send commands to the door module and provides role-based access control over the system settings.

 

Research

After doing some initial research into web frameworks, we opted to use Angular on the client side, and Express.js on server side.

We also looked into full stack frameworks such as Laravel and ASP.NET and client-side frameworks such as ReactJS.

Angular, written in TypeScript, provides a lot of out-of-the-box functionality; for example, Ajax requests are handled by the @angular/http library and routing by the @angular/router library, whereas React offers more flexibility with fewer built-in features. When we decided to incorporate Material Design into our user interface, we found that both frameworks have access to a Material UI library.

Research was also conducted into the ASP.NET MVC framework, a common framework developed by Microsoft. Whilst this framework is powerful, it carries a large overhead when building small web applications.

Laravel is a framework written in PHP. From our research, it was found that PHP has several problems with its language design and core implementation, along with a handful of security issues. An example can be seen in this simple extract:

 

<?php
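    // strcmp() returns NULL when passed an array rather than a string,
    // and NULL == 0 is true under PHP's loose comparison, so the check
    // below can be bypassed.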
    if (strcmp($_POST['password'], 'sekret') == 0) {
        echo "Welcome, authorized user!\n";
    } else {
        echo "Go away, imposter.\n";
    }
?>

 

The above snippet is open to attack with the following commands:

 

$ curl -d password=sekret http://...
Welcome, authorized user!
$ curl -d password=wrong http://...
Go away, imposter.
$ curl -d password[]=wrong http://...
Welcome, authorized user!

 

This change was made to strcmp in PHP 5.3 and was not mentioned in any release logs. Source

 

The decision was made to use Angular as it provides a lot of out-of-the-box functionality. As the work on the website was undertaken by one member of the group, there is less overhead in using built-in functionality than in researching additional libraries. The member undertaking this section, Joanna, also had past experience with the framework from her placement year.

Express.js was chosen as the server-side framework due to it being one of the most popular frameworks on NPM alongside being lightweight, minimal and flexible.

 

Angular 7

Angular operates on a component-based architecture, and as such it was easy to reuse components we had already made, such as the card component below.

 

Below is the component class code, written in TypeScript. The main point to note is the selector, which is named 'card' in this instance. This means the component can be displayed inside another by adding <card></card> to that component's HTML.

/**
 * Component for individual cards to display sensor data
 * from the Chicken Coop.
 * 
 * @author Joanna Zhang
 * @version 19-01-2019
 */
import { Component, Input, OnDestroy } from '@angular/core';
import { Card } from '../../models/data';
import { UpdateService } from '../../services/update.service'
import { Subscription, timer } from 'rxjs'

@Component({
  selector: 'card',
  templateUrl: './card.component.html',
  styleUrls: ['./card.component.less']
})
export class CardComponent implements OnDestroy {
  // Data input passed in from parent - CardSectionComponent
  @Input() data: Card
  /* Display or not display last updated */
  @Input() noTime: boolean
  live = true

  updating: boolean = false

  /* Section for unsubscribing from Subscriptions to avoid
  memory leaks */
  subscription: Subscription = new Subscription()

  constructor(private updateService: UpdateService ){}

  /**
  * Update data for this specific card.
  * @param input the name of the sensor
  */
  onClick(input: string) : void {
    let sensor = input.split(" ")[0].toLowerCase();
    this.updating = true
    // Add the subscription to the update service, so it can be destroyed
    // to avoid memory leaks.
    this.subscription.add(this.updateService.updateSensor(sensor).subscribe(res => {
      // On return value, update the fields.
      this.data.value = res.data
      this.data.time = new Date()
      this.updating = false
    }))
  }

  ngOnDestroy() : void {
    this.subscription.unsubscribe();
  }
}

 

The code below shows the component's template in HTML. As you can see, there are a variety of different syntaxes: moustache syntax ({{ }}) for displaying data from the component in the DOM, and directives such as *ngIf.

 

<!-- Use the Material Card component to render this card -->
<mat-card class="card">
    <mat-card-content>
      <!-- Render the icon, passing in the name of the data type to CardIconComponent -->
      <div>
        <card-icon [type]="data.name"></card-icon>
      </div>
      <!-- Name of the data for clarity -->
      <mat-card-subtitle class="text-right">{{data.name}}</mat-card-subtitle>
      <!-- Display the actual value -->
        <mat-card-title>
        <div class="data text-right">
          <span class="value">{{data.value}}</span>
            <!-- Display the unit -->
            <span *ngIf="data.name == 'Temperature'"><span class="unit">°C</span></span>
            <span *ngIf="data.name == 'Humidity'"><span class="unit">%</span></span>
            <span *ngIf="data.name == 'Dust Density'"><span class="unit">μg/m³</span></span>
            <span *ngIf="data.name == 'Pressure'"><span class="unit">mBar</span></span>
          </div>
        </mat-card-title>

        <hr>
        <!-- Let the user know when it was last updated -->
        <mat-card-subtitle *ngIf="!noTime"> 
          <span *ngIf="data.name != 'Water Level'"> 
          <i *ngIf="!updating" (click)="onClick(data.name)"class="fas fa-sync-alt"></i> 
          <i *ngIf="updating" class="fas fa-spinner fa-spin"></i> 
        </span> 
          
          Last updated: {{data.time | timeago:live }}</mat-card-subtitle>
        <div *ngIf="noTime"></div>
    </mat-card-content>
  </mat-card>

 

The styles for this component are written in LESS:

.door, .container {
    display: flex;
    flex-wrap: wrap;
    justify-content: flex-start;
}

.door-control {
    width: 630px;
    height: 180px;

    .fa-spinner, .fa-check-circle {
        color: #4caf50;
    }

    .error {
        color: #f44336;
    }
}

card-section{
    display: contents;
}

@media (max-width: 992px) { 
    .door-control {
        width: 290px;
        height:auto;
    }
 }

 

This component can now be displayed in another component by referencing the selector as mentioned above. For example, this piece of code is used in the parent component, CardSection.

<card *ngFor="let card of list?.data" [data]="card" class="m-4 pt-1">
</card>

This loops through the array list.data (the ?. safe-navigation operator guards against list being undefined) and passes each value to the card component.

 

In the TypeScript above, a call was made to this.updateService.updateSensor(sensor). The update service contains:

/**
 * Service to request updated sensor readings.
 * 
 * @author Joanna Zhang, Damon Sweeney
 * @version 10-03-2019
 */
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { environment } from '../../environments/environment'
import { Observable } from 'rxjs';
import { Update } from '../models/data'

@Injectable({
    providedIn: 'root',
})
export class UpdateService {
    env = environment

    constructor(private http: HttpClient) { }

    /**
     * Force an update to get new values from the sensor
     * @param sensor String of sensor name
     */
    updateSensor(sensor: string): Observable<Update> {
        return this.http.get<Update>(`${this.env.apiUrl}/update/${sensor}`)
    }
}

A service, in our case, is a class which fetches data from an external resource. In the above example, the service is retrieving data from the /update/ endpoint found on the Express.js server.

 

Please click here to see the source code.

 

Express.js

The Express application acts as a middle man between the frontend Angular application and the database and hub. In the service example above, we saw that a call is made to the /update endpoint. In the main entry point of the Express.js server, we need to add a route for this endpoint; the code below registers the update controller for it.

app.use('/update', require('./update/update.controller'))

 

In the update controller all routes are defined; to update temperature, for example, the endpoint would be /update/temperature:

router.get('/temperature', authorize(), updateTemperature)
router.get('/humidity', authorize(), updateHumidity)
router.get('/dust', authorize(), updateDust)
router.get('/pressure', authorize(), updatePressure)
router.get('/egg', authorize(), updateEgg)
router.get('/door', authorize(), updateDoor)

 

The function updateTemperature is then called:

async function updateTemperature(req, res, next){
    updateService.update("temperature").then(result => {res.json(result)})
        .catch(err => next(err))
}

which in turn calls update(sensor) in update.service.js, which makes an update request to the hub and returns the value.

 

User Interface Design

 

For the User Interface design of the web frontend, we have elected to use common design principles and ideologies to provide a functional and aesthetic experience.

We considered a fully fledged iterative design approach using a combination of the ‘Five Sketches’ method, generating low fidelity designs and testing these against the system user. However, we decided not to pursue this route due to the logistical difficulty of arranging paper prototyping sessions with the client, alongside the small size of our project team.

We used a combination of common design heuristics as well as a tested design ideology to drive the UI of our system. The considerations we will make when designing our UI are outlined later in this document.

In brief, we aim to abide by the heuristics outlined by Jakob Nielsen and Rolf Molich to steer our UI towards high usability, the Gestalt principles of design to aid user discoverability, and a Material Design theme to provide user familiarity.

 

Iteration 1 (Research)

Jakob Nielsen Usability Heuristics

When developing our UI we will keep the design heuristics below in mind to help build an interface that provides a strong level of usability, minimizing the burden on the user. We will strive to help the user avoid erroneous states, clearly detail system errors and how to recover from them, and ensure that the system status is always clearly visible.

 

Usability Heuristics:

  1. Visibility of system status

    The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

     

  2. Match between system and the real world

    The system should speak the users’ language, with words, phrases and concepts familiar to the user.

     

  3. User control and freedom

    Users often make mistakes; provide a simple means for them to return from an erroneous state.

     

  4. Consistency and standards

    Avoid conflicting and confusing use of terminology and actions.

     

  5. Error prevention

    Design with common errors in mind; aim to build a system which restricts the user from entering erroneous states.

     

  6. Recognition rather than recall

    Minimize the memory load burden on the user by making actions accessible and obvious.

     

  7. Flexibility and efficiency of use

    Provide accelerators that allow proficient users to perform tasks quickly.

     

  8. Aesthetic and minimalist design

    Aim to keep dialogues succinct and on topic to avoid diminishing the relative visibility of functionality.

     

  9. Help users recognise, diagnose and recover from errors

    Error messages should be expressed in plain language, provide a focused description of the problem as well as common steps to solve it.

     

  10. Help and documentation

    Provide documentation so that, should users have issues using the system, they are immediately able to look up and resolve their problem.

 

Gestalt Principles of Design

The Gestalt principles of design are built around the psychological phenomenon that humans perceive a group of objects as a unified whole before considering its individual parts. This thought pattern can be exploited to increase the discoverability and usability of an interface by grouping, connecting or distancing related and unrelated elements.

For the scope of UI design some of the principles have been omitted from the list below as they largely lend themselves better to illustration design. The two omitted are ‘Figure/ground’ and ‘Symmetry and Order’.

 

Design principles:

  1. Similarity

    Design elements that have a high commonality of features, be it colour, size or structure, are often considered to be related. This can be used to make the user mentally group alike elements, decreasing the burden of discovering features. Grouped objects should share similar behaviour so as not to break the user's expectations.

    It should be noted that this notion can also be exploited in reverse, using an anomalous element to direct attention towards it.

     

  2. Continuation

    The idea behind continuation is that users tend to follow directed elements until another object is encountered. This can be utilised to direct the user in how to ‘read’ the page, and how they should mentally parse what they are seeing.

    Taking advantage of this can help to ensure that the user receives the information being conveyed in the intended order.

     

  3. Closure

    Closure is the act of the brain comprehending a broken image as a whole; this is often exploited in graphic design to simplify elements whilst maintaining the ability to convey what the graphic is or represents.

     

  4. Proximity

    Proximity is the act of grouping elements close to one another, implying some association between them; the strongest form of proximity is overlap, where one object partially overlaps another or even fully encloses it.

    This can be used to combine singular elements into a collection that are considered as one or help to reiterate a relationship between a set of common elements.

 

Material Design

Material Design is a design system created by Google to provide continuity of design across both mobile and web. Their website describes it as:

Material is an adaptable system of guidelines, components, and tools that support the best practices of user interface design. Backed by open-source code, Material streamlines collaboration between designers and developers, and helps teams quickly build beautiful products.

 

Material Design operates on a paper-and-ink metaphor. By using components such as cards and paper, the style aims to portray information clearly using elements that represent real-world objects. The principle is that, by imitating the physical world, the interface reduces users' cognitive load, increasing predictability and reducing ambiguity.

 

We initially did some research into Material Design interfaces, looking at the main ideas and components which we could incorporate into our system.

 

researching design

 

https://material-ui.com/static/images/themes.jpg

The above design stood out as one of the better ways to display live current data due to the simplicity of the card design. The text at the bottom of the cards was user friendly, displaying time in a 'timeago' format, alongside the bold colours and icons for the type of card.

 

researching design

 

https://i.pinimg.com/originals/50/40/43/5040435b8d25e5c32f20eed1192cbe05.png

 

In the above example, the sidebar stood out as it was simple, with a clear separation between the user image and name and the clickable links to different sections. The icons next to the links add a nice touch; however, the dark background was not desirable due to the low contrast between the white/grey text and icons and the background itself.

 

In our design, we have opted to separate the "system" user from the regular users, isolating the functionality that may break the use of the chicken door to the "system" account.

 

Iteration 2 (Mockups)

After conducting some research, some sketches of the initial design were drawn up on Procreate.

 

Main Dashboard Page

 

sketch of main dashboard page

 

The temperature, humidity, dust etc. cards were well received; however, the main door control section goes against the design of Material cards. Due to time constraints, we started implementing the bottom card section and designed other ideas for the door control later on.

 

Login Page

 

sketch of login page

 

The login page was designed with simplicity in mind, the main focus being the login button. Following Jakob Nielsen's usability heuristics, this page should provide feedback if a user has input their password unsuccessfully.

 

History Page

 

sketch of history page

 

It was decided early on that the graphs displaying the historical data should sit on cards to separate the types of data being displayed. As we did not know which graphing library we were going to use, the sketch of the historical data page was brief.

 

After these initial sketch ups were completed, work was done to implement these designs in the code.

 

Iteration 2 (Implementation)

The initial design for the main page is as follows:

 

first version of main page

 

Compared to our initial sketch, the bottom data display section has not changed much, as it was well received. However, a design for the door display was created with a visual to show the owner whether the door is open or closed. In hindsight, this design was a bad choice: the 'Current Info' panel doesn't contain information that is actually 'current', and the graphic detracts from the data supplied on either side, so the upper section needed a complete redesign.

 

first version of system page

 

An initial version of the system page was created; it was difficult to design a page containing a large text area for the interpreted language code alongside a small textbox for the postcode. With feedback from our supervisor, we felt it was better if the code area was separated out into a new tab, with only data pertaining to system settings remaining on the settings page.

 

first version of login

 

The login page was created with guidance from our sketch. Not much of the structure has changed apart from the buttons, and the top bar was removed. This is to make the main card and the login button the centre of attention.

first version of login

 

The registration page followed the same outline as the login page, displaying error messages when user input is wrong.

 

Iteration 3

second version of main page

 

The main page went through another iteration, with the data previously in 'current info' and 'door info' separated out into the same style of cards. The main door control card sits in a jumbotron as we were unsure how to display the door data and control buttons. After discussion within the group, it was concluded that the jumbotron would be replaced with a double-width card.

 

first version of history page

 

The initial history page only contained graphs for monitoring data, with the colour of each graph corresponding to the icons on the main page. It was brought to our attention by Dan that visualizations such as the one below exist. These provide a better view of the door state, so we set out to integrate this style of graph for the door state in the next iteration.

 

timeline approach to displaying on off switches

 

https://www.home-assistant.io/images/lovelace/lovelace_history_graph.png

 

first version of settings

 

The first revision of the settings page consisted of three cards in a line displaying the multiple functions a user can perform. Unfortunately this had the issue that, if the email list increased in length, the other cards would look disproportionate. Dan suggested we combine these three cards into one, as they all have the same purpose (changing user settings); separating them into three goes against Material Design principles.

 

In the previous iteration, Dan suggested we separate out the code and system settings into two pages. The following were the result of this suggestion.

 

first version of code page

 

By separating the code out into its own section, it was cleaner to add more functionality to the existing code editor. A tab system was added so the user can manage multiple snippets of code. These snippets can be managed individually, such as being deleted or set as active and running on the hub.

 

first version of system settings

 

As the code section was moved out of the system settings page, the main settings page seemed empty. To counteract this, how the user inputs their location has changed: instead of typing in a new postcode, the user can click their location on the map, and save buttons have been provided to confirm the change. The user-defined door control times can also be changed, following the same design as the main cards on the dashboard.

 

Iteration 4

first and final log version

 

In iteration 4, we implemented the display of logs from the hub. This page uses monospace fonts resembling those of old terminals. As this page only contained text, there was not much to change and it was left as is. Improvements could be made in the future to colour-code keywords to aid the user in reading the logs.

 

second version of history page

 

This next version of the historical data page contains a timeline displaying the door state. By using Google Charts, users can hover over a section to see how long the door has been in a given state.

 

third version of main page

 

The door control card has been refactored into two cards, with data displaying the state of the door. There is also an action button that the user can press to control the door. Once clicked, this button shows a spinner, notifying the user that the command is executing. Even though this card is done, the layout of the cards still looks off, so this will be tweaked in the next iteration.

 

final version of settings page

 

With advice from our supervisor, the settings cards have been merged into one, owing to the reasons described in the previous iteration. This makes the page look a lot cleaner. On deleting an email the user is prompted with a dialog confirming the action. The user can click on the "Upload Picture" box to select an image, or drag and drop an image onto this section.

 

Iteration 5

final version of historical data

 

We quickly realised that the water data could be displayed with a timeline graph as well; the above image shows this implemented. Also, as shown below, a user can hover over these graphs to read the precise value and time.

 

hover over line graph pop up

 

hover over timeline pop up

 

 

 

second version of main page

 

In this final iteration of the main page, the cards are ordered one after another rather than being separated. In the future, if time allowed, a new feature could be added so the user can drag and drop the cards into their preferred order. The above picture shows a snapshot of the spinner after clicking the 'close door' button.

 

final version of system settings

 

The system settings page has been updated with cards displaying the IP address of the hub, alongside the ability to change the system account's password. An improvement could be to move the password changer to a separate page containing only this functionality, to make its purpose clearer. The user is also able to clear the current automation time.

 

system settings with clock popup

 

Time is always a tricky datatype to work with; however, we've simplified this for the user: when any of the time fields is clicked, a popup containing a clock appears, on which the user can select the time.

 

system settings with clock popup

 

When automation is off, the user is notified with red text.

 

final version of code page

 

The code page has been slightly updated with the ability for the user to name their individual code snippets. The add button is now also level with the tabs.

 

code popup

 

The user will be asked to confirm if they want to send the code to the hub. They will also be prompted when delete is pressed as this is irreversible.

 

broken code error

 

Any code that doesn't compile when being sent to the hub will return an error message describing the syntax error.

 

Iteration 6

first and final version of wizard

 

To aid the user in first-time setup, we've created a wizard which prompts for the IP of the main hub, a new password for the system user, and their location. As this uses the Material Stepper component, few design changes were needed.

The user can easily see which step they are on and can jump back to previous ones by pressing the back button or clicking the edit button next to a previous step. If there is an error in proceeding to the next step, the user is notified.

We've also provided some basic instructions on how to manage users and write our domain specific language.

 

first and final version of wizard

 

The user management page only displays the data that a system admin is concerned about. The system admin can easily change any of these fields with a simple click.

 

Final Iteration

Throughout these iterations we have also made the interface mobile friendly. However, a final pass was made through the project to catch any non-responsive elements. The images below show the website displayed on a mobile.

User

main page

 

main page

 

historical on mobile

 

historical on mobile

 

user settings on mobile

 

sidebar

 

 

System

 

system settings on mobile

 

user management on mobile

 

language interface on mobile

 

Misc.

 

login on mobile

 

register on mobile

 

 

The above has been achieved using the built-in Material UI components, which already have mobile support, alongside flexbox and the Bootstrap grid. Any other elements that do not utilize these have been made responsive using CSS media queries.

 

Historical Data

As seen in the previous UI design section, we needed to provide a way for our users to view historical data from the modular network in a user friendly way.

Owing to this, we conducted research into the graphing libraries available to us. After exploring potential options, we opted for Chart.js: though its chart types were limited, it supported the main ones we needed. It is also open source, with easy-to-read documentation providing many examples.

The other graphing library we used is Google Charts. Google Charts has the ability to display timeline graphs which are used for the water and door state.

There are wrapper libraries for Angular built around both of these charting libraries, which we used to help speed up development.

 

Line Graphs

 

researching design

 

Line graphs were used to display monitoring data with continuously variable values, for example temperature and humidity.

 

For each of these data types, the code below was created to aggregate the data into the structure required by the library.

 

public dustData: Array<any> = [
    { data: [], label: 'Dust / ug/m3' }
]
public dustLabels: Array<any> = []

public dustColors: Array<any> = [
    {
        backgroundColor: 'rgba(38, 166, 154, 0.45)',
        borderColor: 'rgba(38, 166, 154, 0.65)',
        pointBackgroundColor: 'rgba(148,159,177,1)',
        pointBorderColor: '#fff',
        pointHoverBackgroundColor: '#fff',
        pointHoverBorderColor: 'rgba(148,159,177,0.8)'
    }
]

 

In the code below, we grab the data from the service and format it to work with the graphing library.

this.subscription.add(this.dataService.getDustValues(10).subscribe(x => {
    x.data.forEach(element => {
        this.dustData[0].data.push(element.value)
        this.dustLabels.push(new Date(element.date))
    })

    // Needed to auto update chart for dynamic datasets
    // Trick: Reassign to the exposed dataset property into your
    // component so it triggers change detection
    // https://github.com/valor-software/ng2-charts/issues/959#issuecomment-367171535
    this.dustData[0].data.reverse()
    this.dustLabels.reverse()
    this.dustData = Object.assign([], this.dustData);
}))

 

To display this in the DOM: canvas (with the baseChart directive) is the selector for the library; [datasets], [labels] and the other attributes in square brackets are the input field names of the chart component, and the values in quotes are the fields of the current component passed to it.

<mat-card class="card">
    <mat-card-title>Dust</mat-card-title>
    <div style="display: block;">
        <canvas height="200%" baseChart [datasets]="dustData" [labels]="dustLabels" [options]="lineChartOptions"
                [colors]="dustColors" [legend]="lineChartLegend" [chartType]="lineChartType"></canvas>
    </div>
</mat-card>

 

Timeline graphs

Timeline graphs were added to the web interface as a better way of displaying data with predictable states. For example, the water sensor can only be true or false and the door can be "open", "closed" or "error".

 

timeline graphs

 

The code below shows data being retrieved from the service and formatted to meet the requirements of the Google Charts library.

 

this.subscription.add(this.dataService.getWaterValues(10).subscribe(res => {

    let pastDate
    let pastVal
    for (var iter in res.data) {
        let data = res.data[iter]
        // Map the boolean sensor value onto its timeline label.
        let currentVal = data.value ? "present" : "low"

        if (!pastDate) {
            pastDate = data.date
            pastVal = currentVal
        } else if (pastVal !== currentVal) {
            // State changed: close off the previous timeline segment.
            let addData = ['Water', pastVal, this.df(new Date(pastDate)),
                           this.df(new Date(data.date))]
            this.waterData.push(addData)
            pastDate = data.date
            pastVal = currentVal
        }
    }
    // The final segment runs up to the present time.
    let addData = ['Water', pastVal, this.df(new Date(pastDate)), this.df(new Date())]
    this.waterData.push(addData)
    this.waterData = Object.assign([], this.waterData)
}
))

 

The code for the page can be found here.

 

 

Role Based Authorization

We opted for role-based access control to limit the users who are able to access the web interface. By creating two separate types of user, "System" and "User", we were able to limit access to critical functions such as the programmable interface. The User role is further split into two: users who have access to door control and users who do not. This can all be changed from the System account.

 

JSON Web Tokens

JSON Web Tokens, also known as JWTs, are a standard for generating access tokens containing claims such as whether a user is logged in as an admin.

In our Express.js server, when the login endpoint is hit, we check the credentials. Once they are confirmed to be correct, we create a token from the username and role, signing it with a shared secret. This is then returned to the front end.

 

const token = jwt.sign({sub: user.rows[0].username, 
                       role: user.rows[0].admin ? Role.Admin : Role.User}, 
                       config.secret)
return {
    userObj,
    token
}

 

The user object is then stored inside a BehaviorSubject in the authentication service. For any request to the server, the token is added to the Authorization header by the JwtInterceptor, so that the request can be authenticated on the server side.

 

intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    let currentUser = this.authenticationService.currentUserVal
    if (currentUser && currentUser.token) {
        req = req.clone({
            setHeaders: {
                Authorization: `Bearer ${currentUser.token}`
            }
        })
    }
	return next.handle(req)
}

 

Server side, routes that require authentication have another callback function. For example:

router.get('/getLocation', authorize(), getLocation)
router.post('/updateLocation', authorize(Role.Admin), updateLocation)

 

In the first example, the authorize() function will only call next() to proceed to the getLocation() method when the user is successfully logged in as any type of user. In contrast, a user accessing the /updateLocation endpoint must have the admin role, otherwise an unauthorized status code of 401 is returned.

function authorize(roles = []) {

    if (typeof roles === 'string') {
        roles = [roles]
    }

    return [
        // Authenticates users using JWT, 
        // if the token is valid, req.user will be set in the JSON object
        expressJwt({ secret }),  

        // authorize based on user role
        (req, res, next) => {
            if (roles.length && !roles.includes(req.user.role)) {   
                // Unauthorized, return error
                return res.status(401).json({ message: 'Unauthorized' })
            }

            // authentication and authorization successful
            next()
        }
    ]
}
// NB: this code is adapted to work in our system from: 
// http://jasonwatmore.com/post/2018/11/28/nodejs-role-based-authorization-tutorial-with-example-api

 

When the front end receives an unauthorized error, the user is forcefully logged out by the ErrorInterceptor, which intercepts the response before it can be processed.

 

Guards

Guards in Angular ensure that a user can only navigate to a page under certain conditions. By specifying in the Angular routes which pages need which guard, you can control access to these pages. For example, the login page has a guard that only allows users who are not logged in to access it.

The routes are as follows:

const routes: Routes = [
  {path: '', redirectTo: 'setup', pathMatch: 'full'},
  {path: 'setup', component: SetupComponent, canActivate:[SetupGuard]},
  {path: 'login', component: LoginComponent, canActivate:[LoginGuard]},
  {path: 'register', component: RegistrationComponent, canActivate:[LoginGuard]},
  {path: 'user', component: SideBarComponent, canActivate:[AuthGuard],
    children: [
      ...
    ]},
  {path: 'system', component: AdminSideBarComponent, canActivate:[AdminGuard],
    children: [
      ...
    ]},
  { path: '**', redirectTo: 'user'}
]

 

LoginGuard:

export class LoginGuard implements CanActivate {

    constructor(private router: Router, 
                 private authenticationService: AuthenticationService) { }

    /**
     * Redirect user to dashboard if already logged in
     * @param route route
     * @param state state
     */
    canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot) : boolean {
        console.log("login guard")
        if (localStorage.getItem('currentUser')) {
            const currentUser = this.authenticationService.currentUserVal
            if(currentUser.userObj.role === Role.Admin){
                // User is logged in as System admin. Redirect to main page. 
                this.router.navigate(['/system'], 
                                     { queryParams: { returnUrl: state.url }})
                return false
            }
            // User is logged in. Redirect to main page. 
            this.router.navigate(['/user/dashboard'], 
                                 { queryParams: { returnUrl: state.url }})
            return false
        }
        // Can navigate to login page - return true.
        return true
    }
}

 

The above code returns true when the user is not logged in, forbidding logged-in users from seeing the login page. A user who is logged in will simply be redirected to the main dashboard page.

For the other guards, please see here.

 

Language Interface

We have provided a web interface which only System admins can use to communicate with and update the domain specific language running on the hub.

 

final version of code page

 

The interface for this is structured as a parent and child component. The parent component, LanguageComponent, is responsible for managing the list of code snippets, utilizing the mat-tab component found in the Angular Material library.

The child, CodeEditorComponent, is responsible for displaying the selected code snippet using CodeMirror as the text editor.

 

The user has the option to:

  1. save the code
  2. delete the code
  3. save and send the code to the hub
  4. save and delete from hub

 

The backend provides APIs which aid this process.

 

GET /code/getSourceCode

HTTP Status Code    Response
200                 [{id, name, code, date_created, last_modified, is_active},...]
400                 { message: 'Unable to get source code' }

Retrieves a list of code objects.

 

POST /code/updateSourceCode

HTTP Status Code    Response
200                 true

This endpoint simply updates the code in the database.

 

POST /code/toggleActiveCode

HTTP Status Code    Response
200                 {compiles: true | false, message: "", saved: true | false }

This endpoint toggles whether the code is active. It first calls the code/check endpoint on the hub to verify that the code is syntactically correct. Once confirmed, a check is made as to whether the code should be set inactive or active. If it is to be set active, the function disables the previously enabled code before activating this one, and then calls the hub to refresh the list of active code.

 

POST /code/saveNewCode

HTTP Status Code    Response
200                 true

This endpoint simply inserts code into the database.

 

DELETE /deleteCode/:codeId

HTTP Status Code    Response
200                 true

This endpoint will remove the code with the specified ID from the database.

 

 

Future Work

Due to time constraints on the project, there are extra features we would have liked to implement, given more time.

 

User customizable interface

We built this web interface with the idea in mind that users could drag and drop the cards on the dashboard to reorganise or remove them to their liking. Historical data cards could also be added to the main page, focusing the user on the readings they want to see.

 

Integrate emails

Instead of relying solely on the programmable language to send emails and SMS, the user could be allowed to set up their own alerts directly on the web interface.

 

Specify time period

When viewing data from the historical tab, or the logs, data is only shown for a fixed past period. It would be better if the user could specify the time period to view.

 

In App notifications

It was brought up during a discussion that we could implement in-app notifications. Using guidelines for temperature and dust levels, for example, notifications could be raised when any of these values go out of bounds. The water sensor could also trigger an in-app notification if levels are low.