GeoStream™

How can businesses improve mobile customer experience?

GETTING CLOSER TO YOUR MOBILE USERS – NO MATTER WHERE THEY ARE IN THE WORLD.

A couple of years ago one of our clients approached us with a difficult question:

“How can we understand and improve mobile game performance and player experience in targeted locations around the world?”

This was not easy to answer.

Simulating current game performance

The first step was figuring out how to simulate game performance on mobile devices across different geographical regions and mobile networks.

We considered simulating latency and throughput from selected data centre virtual machines, using traffic shaping to introduce errors and approximate real-world conditions, but we quickly shelved this idea because it does not account for the many factors that can affect mobile network performance.
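For context, traffic shaping of this kind is typically done on Linux with `tc` and the `netem` queueing discipline. The sketch below builds such a command; the interface name and impairment values are illustrative assumptions, not figures from our testing.

```python
# Build a Linux traffic-shaping command of the kind we considered (and rejected).
# The interface name and impairment values below are illustrative assumptions.
def netem_command(device="eth0", delay_ms=250, jitter_ms=40, loss_pct=1.0):
    """Return a tc/netem command that adds latency, jitter and packet loss."""
    return [
        "tc", "qdisc", "add", "dev", device, "root", "netem",
        "delay", "{}ms".format(delay_ms), "{}ms".format(jitter_ms),
        "loss", "{}%".format(loss_pct),
    ]

# Running this (as root) would shape all traffic on the device, e.g.:
#   subprocess.run(netem_command(), check=True)
print(" ".join(netem_command()))
```

Even with jitter and loss dialled in, a shaped VM link still misses radio conditions, tower handovers and carrier-specific routing, which is why we abandoned the approach.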

You may ask, “Why is this a challenge?” Consider these problem statements:

  • What is a person’s experience like in Accra, Ghana, or Novosibirsk, Siberia?
  • Why are some geographies or locations performing poorly?
  • Which mobile networks show the worst player experience, and why?
  • How do we measure any improvements once they are implemented?

We concluded that there was no solution available for our customer. We promptly set about building one.

The idea the team came up with was to build a device that would mimic a mobile phone and its user, and that could be deployed on site at each targeted location. Using physical mobile phones was not a viable option due to battery, heat and other reliability issues.

The mobile devices needed to meet the following criteria:

  • Run 24/7 on a continuous basis.
  • Accept local SIM cards for the mobile networks.
  • Offer processor capacity close to that of a mobile phone.
  • Be small, transportable and robust.
  • Operate fully automatically, with almost no user intervention.

Our team tested a number of microcomputer options and finally settled on a customised version of the Raspberry Pi 4. Additional 4G modules, external aerials, a cooling system and a custom-designed aluminium housing were added to the package, delivering the capacity of four mobile networks (SIMs) per housing – now named GeoStream™.

Each GeoStream™ enables 4 mobile networks to be performance tested and monitored per location.

Data-driven decisions are at the core of TechConnect’s capability, delivering solutions to our customers and unlocking value from data. It therefore goes without saying that we must collect and analyse the data. GeoStream™ includes:

  • Capture of data, sent back to a centralised datalake in Australia.
  • Support for multiple devices per location for broader testing and data collection.
  • Secure housing and cooling to deliver robustness in remote locations.
  • A reporting and analytics platform to analyse the data and enable data-driven decisions.
  • A scheduling platform for scheduling activity and runbooks.

Collecting the data

Custom Python scripts automatically connect the devices back to the datalake and control system in Sydney. Data is analysed via a web-based front-end application.
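As an illustration only, a device-side upload of this kind might look like the sketch below; the record fields, bucket name and use of boto3 are assumptions for the example, not details of the actual GeoStream™ scripts.

```python
import json
import time

# Illustrative only: the field names and datalake destination are assumptions,
# not details of the actual GeoStream™ collection scripts.
def build_result_record(location, network, latency_ms, throughput_kbps):
    """Package one test result for shipment to the central datalake."""
    return {
        "timestamp": int(time.time()),
        "location": location,          # e.g. "Accra"
        "network": network,            # SIM / carrier under test
        "latency_ms": latency_ms,
        "throughput_kbps": throughput_kbps,
    }

record = build_result_record("Accra", "MTN", 182.4, 5400)
payload = json.dumps(record).encode("utf-8")
# A device would then ship the payload, for example:
#   boto3.client("s3").put_object(Bucket="geostream-datalake", Key=..., Body=payload)
print(payload.decode("utf-8"))
```

Keeping the record small and self-describing is what lets the central platform analyse results from any location with the same pipeline.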

The GeoStream™ front-end is built on Angular and utilises a number of Amazon Web Services capabilities. The front-end gives administrators the ability to assign role-based access controls for roles such as admin, tester, scheduler and analytics.

The first revision of GeoStream™ was released in late 2019, with the very first unit deployed to Ghana. GeoStream™ has helped deliver improvements across both the MTN and Vodafone networks in Accra, and the Ghana unit continues serving analytics and test result data to the datalake in Sydney. GeoStream™ revision 2 was released in July 2020 and is destined for locations across Canada, New Zealand, India, Taiwan, London and Estonia later in 2020, with many more expected to follow.

GeoStream™ delivers in-country mobile network performance analysis, addressing many of the challenges described above. The product provides a commercially viable tool that gives our client an accurate view of their mobile users’ experience across all their applications, no matter where those users are and at any time of day.

“We love to innovate, and when innovation intersects customer value we have made a real impact,” said Mike Cunningham, TechConnect CEO.

TechConnect plans to roll out many GeoStream™ devices in conjunction with our private content delivery network, named Slipstream, to improve mobile player experience for our customers’ customers.

Gold Coast Excellence Awards 2020 - IT and Digital

The Gold Coast Business Excellence Awards launched in 1996 and in 2020 are celebrating their 25th year! During this time the Awards have grown to be recognised as the region’s most comprehensive and prestigious business awards scheme, offering specific and meaningful benefits to the wider Gold Coast business community.

TechConnect is super proud to support the Gold Coast community, growing locally, nationally and internationally. With a presence in Brisbane, Melbourne, Perth and South Africa, we have grown from our humble base on the Gold Coast, Queensland.

This was the first year TechConnect has entered these awards, and we are thrilled to have won the IT & Digital Business category. We look forward to the annual awards and to competing with some of the amazing businesses on the Gold Coast.


Pictured below is Mike Cunningham – CEO of TechConnect (left) and Fabrizio Carmignani – Professor of Economics, Griffith University (right).

Gold Coast Excellence Awards

Digital Transformation with Data

Harnessing Data to drive effective digital transformation

The COVID-19 pandemic has made clear that businesses need to be prepared for flexible, remote working practices.

As lockdowns forced offices to close and people headed home to limit the potential spread of the virus, many organisations found they weren’t prepared to provision the necessary work from home (WFH) technology and processes for their staff to continue with business as usual.

As a result, businesses have been required to undertake (or accelerate) a significant digital transformation journey to get up to speed. As these transformation journeys roll out, the need to harness data effectively becomes more critical than ever for a successful, long term change. Here’s what you need to consider.

A strategic approach

Before beginning a digital transformation, it’s critical to have a strategy in place to explain how you will manage, store, secure and use your data. Yet, this is a step that’s often forgotten in the rush to transform and digitise processes.

A data strategy should be driven by the needs of your business. Your strategy will also define how to make decisions about the use of data, more capably manage data flow, and secure information effectively.

Any successful plan will identify realistic goals along with a road map for rolling it out. This ensures that you’re properly prepared for every step of the journey.

Beginning the journey

A digital transformation unshackles an organisation from the past. It empowers you to move into the future, free of outdated technology and slower manual processes.

For example, take mobile and cloud technology. While we were once restricted to an office environment for productive working, it’s now possible for geographically diverse teams to collaborate as efficiently as they would in a traditional office setting. Files, apps, and other resources can all be accessed remotely, and meetings held virtually, giving workplaces and workforces the ability to be truly flexible.

However, the reality of a digital transformation is that with staff spread across locations, there are a range of new infrastructure management issues to consider. Chief among these is data security.

Keeping data safe is vital as users access business networks and devices remotely, often without the protection provided by robust on-site architecture. It’s important to decide how you’ll service and secure company devices, how you’ll make sure users and the data they handle and generate are protected, and to implement those systems early.

Harnessing the power of data

With a clear data strategy in place and your digital journey underway, you can start to take advantage of the power of your data and use it to drive improved decision-making internally and externally.

Artificial intelligence (AI) and machine learning (ML) can be used to sort your unstructured data, learning as they go to uncover valuable insights. Once the data has been cleansed, you can enrich it by adding third-party data or public datasets to uncover more hidden insight.
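As a toy illustration of that enrichment step, the sketch below joins cleansed internal records with a public dataset on a shared key; all field names and values are invented for the example.

```python
# Toy illustration of enriching cleansed internal data with a public dataset.
# All field names and values here are invented for the example.
customers = [
    {"customer_id": 1, "postcode": "4217", "monthly_spend": 120.0},
    {"customer_id": 2, "postcode": "4000", "monthly_spend": 85.5},
]

# Hypothetical third-party/public dataset keyed by postcode
demographics = {
    "4217": {"region": "Gold Coast", "median_age": 38},
    "4000": {"region": "Brisbane CBD", "median_age": 32},
}

# Join the two sources on the shared key to produce enriched records
enriched = [
    {**c, **demographics.get(c["postcode"], {})}
    for c in customers
]

print(enriched[0]["region"])  # Gold Coast
```

At scale the same join runs in a warehouse or Spark job, but the principle is identical: a shared key lets external context attach to your own records and surface new segments.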

The adoption of AI and ML also frees your people for bigger picture tasks. Instead of manually sorting through stacks of data, they can concentrate on delivering valuable and creative work powered by the insights you’ve identified – ultimately working towards the goals outlined in your data strategy.

Gathering data helps to deliver external benefits to your business too. It can improve customer service by identifying current pain points or uncover new customer segments for targeting – the possibilities are endless!

The lesson in the journey

Businesses shouldn’t underestimate the change that needs to be undertaken in digital transformation journeys. They require significant planning and thought before beginning. However, while the challenge is large, data can make the journey less difficult and more successful.

Accessible, accurate and relevant data enables businesses to make better informed decisions and deliver actionable insights. And by establishing a data strategy up front, you can better understand, apply and secure your data to meet the needs of your organisation.

If you’re asking yourself questions such as “Are we doing things the right way?” or “Can we do this better?”, why not get in touch and let’s explore how TechConnect can deliver results for your business as you undertake a digital transformation.

TechConnect achieves AWS Data and Analytics Competency

AWS Data and Analytics Competency

The certification proves technical proficiency, operational excellence, security, reliability and 360-degree customer delivery capability, and cites major client projects with Virgin’s Velocity Frequent Flyer and IntelliHQ.

TechConnect IT Solutions (TechConnect), a leading provider of cloud services and an Amazon Web Services (AWS) Advanced Consulting Partner, today announces it has been awarded AWS Data and Analytics Competency certification; the only Advanced Consulting Partner in Australia to achieve this prestigious competency level.

The AWS Competency Program recognises partners who demonstrate technical proficiency and proven customer success in specialised solution areas. TechConnect undertook a rigorous partner validation process to be awarded the certification, including an independent audit of its technical, organisational, governance and customer capabilities; along with scrutiny of large scale, in-production customer deployments.

Customer case studies that were reviewed as part of the certification process include a customer insights project with Velocity Frequent Flyer; a predictive medicine data platform for IntelliHQ that uses heart rate variability to predict patient outcomes; and a big data analytics project with Kamala Tech that gave technical users data to form insights across the business including areas such as data science, machine learning, marketing systems, reporting and self-service capabilities.

As healthcare comes under more and more pressure to deliver quality personalised care within constrained budgets, the industry is innovating with data to drive efficiencies and better patient outcomes. As machine learning and artificial intelligence (AI) emerge as drivers for businesses to do more with less, healthcare can deliver better care with the same resources, using data to drive out those efficiencies.

“We partner with industry in AI as we need as many talented and gifted people in this space as possible,” said Dr Brent Richards, Medical Director of Innovation – Gold Coast Hospital and Health Service (GCHHS). “There is a lot that the industry can bring that healthcare specifically does not have in terms of hardware, software and talent.” Dr Richards played a key role in the project with IntelliHQ, a partnership between Gold Coast Health, industry and universities to transform healthcare through AI, enhancing patient outcomes and improving quality of care while maximising cost-effectiveness.

Oliver Rees, Chief Analytics Officer with Virgin’s Velocity Frequent Flyer program, has commended TechConnect’s expertise and cites the direct benefits for member experience. “Velocity have always been a company that is passionate about using insights to understand and improve on their members’ experience. Velocity worked with TechConnect to build a platform that would allow Velocity to combine member insights in a single location to make it easier for members to then receive relevant program offers,” said Rees.

“In achieving this level of competency with AWS, TechConnect has demonstrated our ability to help customers solve their most challenging data problems within large scale production deployments. We proved that we have deep expertise in designing, implementing, and managing Data and Analytics applications on the AWS platform and have delivered solutions seamlessly in the AWS Cloud environment,” said Clinton Thomson, Director of TechConnect IT Solutions.

TechConnect is a fast-growing Australian company, headquartered in Queensland and serving clients around Australia and worldwide. TechConnect helps customers extract business value from data and it has plans to grow its team to 100+ people over the next three to five years, creating graduate employment and professional development opportunities in Queensland and throughout Australia. The company has offices in Brisbane and the Gold Coast and has a graduate pathways program for top students in the STEM fields.

CRN Fast50 for 2019 Award

TechConnect was listed in the CRN Fast50 for 2019, settling into 15th position in our debut year. A day after receiving the Deloitte Technology Fast 50 award, we were again presented with a great result. Read about the Deloitte Technology Fast 50 here if you missed it.

The CRN Fast50 award, now in its 11th year, recognises the fastest-growing companies in the Australian IT channel, based on year-on-year revenue growth. “The 2019 CRN Fast50 put up astounding numbers: they grew at least 15 times faster than Australia’s economy.” – Simon Sharwood, Editorial Director – CRN.

This was the first year TechConnect has entered this award, and we are thrilled to be listed 15th, with a growth rate of 67%. Simon Sharwood also stated, “It’s a huge achievement to have made it into the CRN Fast50. Your company’s growth not only outpaced most others in our industry, it also vastly exceeds Australia’s current overall growth rate!”

This is a great achievement for us, and we would like to thank our customers and team; without them this would not have been possible. Team TechConnect would also like to extend congratulations to all the other winners. TechConnect looks forward to expanding our growth and moving up the list in coming years.

Deloitte – Technology Fast 50 Australia 2019

We are extremely excited and honoured to have been listed as one of Deloitte’s Technology Fast 50 Australian companies, ranking 43rd on the list with a growth rate of 161% in our debut year. This is a huge achievement not only for the company but also for our team and the hard work they have put in to provide solutions for our customers.

“Now in its 18th year in Australia, the Deloitte Technology Fast 50 program ranks fast growing technology companies, public or private, based on percentage revenue growth over three years.” – Deloitte

“More than ever, this year showcases world-leading Australian business innovation and it is a tremendous achievement for any company to be named among the Deloitte Technology Fast 50,” said Deloitte Private Partner and Technology Fast 50 Leader, Josh Tanchel.

With our significant growth over the years, our hard-working team and most importantly our valued customers, we were able to achieve this amazing outcome. TechConnect is getting ready for exponential growth, through expanding into new markets and deep data specialisations.

The awards night was held on Wednesday, 20 November in Sydney at the Museum of Contemporary Art. Clinton, our Director, was there to accept the award on behalf of TechConnect. “It was a great night; well done to all the other recipients and thank you to Deloitte for putting it all together,” said Clinton Thomson, TechConnect IT Solutions.

IntelliHQ uses Machine Learning on AWS to create a web-based ECG live stream

The coming 4th Industrial Revolution – an exponential convergence of digital, physical, technological and biological advances – will transform industries over the course of the 21st century. No single innovation will lead this impending revolution, but one thing is clear: Artificial Intelligence (AI) will be at the forefront.

Healthcare is among the foremost industries ripe to be revolutionised by AI and the 4th Industrial Revolution. By enabling ground-breaking advances in healthcare digitisation, AI is expected to significantly contribute to medical advances and lead to marked improvements in healthcare delivery.

 

Embracing AI to revolutionise healthcare

A not-for-profit partnership between Gold Coast Health, industry and universities, IntelliHQ (Intelligent Health Queensland) is dedicated to promoting research, investment, and monetisation of next-generation AI and machine learning technologies in the health sector. It aims to enhance patient outcomes, improve quality of care, create opportunities for skills, jobs and venture development, and encourage investment. By harnessing trusted AI, IntelliHQ aspires to become a globally recognised healthcare innovation and commercialisation hub.

Building a global healthcare AI capability promises to deliver significant benefits, not only by relieving pressure on the medical system resulting from spiralling costs, but also by contributing to broader technological economic growth, global competitiveness, and skill creation.

But to realise its aspirations, IntelliHQ has to overcome several challenges and obstacles to AI adoption in healthcare. It needs to build community trust and maintain secure access to patient data. As such, IntelliHQ needed a technology partner to enable it to achieve its goals.

 

IntelliHQ engages TechConnect

To enable its key initiatives, IntelliHQ worked with AWS Partner Network (APN) Advanced Consulting Partner TechConnect to create a web-based ECG live stream with machine learning annotations as a proof of concept. This required numerous cloud technologies for data security, storage, transformation, and deployment.

An ECG signal can be categorised as either a normal (healthy) signal or one of various types of abnormal (unhealthy) classifications, such as atrial fibrillation. Key health indicators are the standard deviation of the time between successive peaks in the ECG signal, and the ratio of low-frequency to high-frequency signal power.

It’s possible to generate annotations of these intervals and apply an algorithm to the resulting data to make classifications without machine learning. But given the ability for machine learning to make increasingly sophisticated classifications and predictions, applying these technologies to health data presents many advantages.
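As a simplified sketch of that non-ML baseline, the snippet below flags a series of beat intervals by the standard deviation of the intervals between peaks; the threshold and interval values are illustrative assumptions, not clinically validated figures.

```python
import statistics

# Simplified sketch of a rule-based baseline; the threshold is an illustrative
# assumption, not a clinically validated value.
def classify_rhythm(rr_intervals_ms, sdnn_threshold_ms=120.0):
    """Flag a beat-interval series as 'abnormal' when the standard deviation
    of the intervals between successive peaks exceeds a threshold."""
    sdnn = statistics.stdev(rr_intervals_ms)
    return ("abnormal" if sdnn > sdnn_threshold_ms else "normal"), sdnn

steady = [800, 810, 795, 805, 800, 790]      # regular, sinus-like intervals
erratic = [600, 1100, 720, 950, 560, 1200]   # highly irregular intervals

print(classify_rhythm(steady)[0])   # normal
print(classify_rhythm(erratic)[0])  # abnormal
```

A fixed rule like this captures one indicator at a time; the appeal of machine learning is combining many such signals into classifications a hand-written threshold cannot express.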

 

The Solution

To extract ICU data to the cloud, IntelliHQ used TechConnect’s Panacea toolset, a C# library and Windows service that connects to a GE Carescape Gateway High Speed Data Interface, subscribes to data feeds and pushes those feeds to the cloud via Amazon Kinesis Data Firehose, AWS’s high-speed data ingestion service.

As actual ICU data could not be streamed from a hospital to the cloud until all custodianship processes had been finalised, this proof of concept used a demonstration monitor to simulate a typical healthy heartbeat and several abnormal rhythms, and to test various combinations of connected leads.

Once subscribed to a data feed, the Windows service inspects each packet to identify source information, and then converts the ECG waveform and extra numeric data into a hexadecimal representation for transport to the cloud. The source information is parsed by an AWS Lambda function, which then writes the data into an Amazon S3 folder structure that uses folder names for a simple organisational scheme.

Parameters are extracted from the Amazon Kinesis Data Firehose stream by the AWS Lambda function in a structured format that reflects the default naming convention for Hive partitions on Amazon S3. Many AWS and Hadoop-compatible tools (like Amazon Athena) can use this partitioning scheme to efficiently select subsets of data when queried by these parameters.
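To illustrate the Hive-style convention, the sketch below builds an S3 key from extracted parameters; the partition names are invented for the example, not the actual Panacea schema.

```python
# Illustrative only: the partition names are invented for the example,
# not the actual Panacea schema.
def hive_partition_key(bed_id, year, month, day, filename):
    """Build an S3 key using Hive's key=value partition convention, which
    tools such as Amazon Athena can prune efficiently at query time."""
    return "bed_id={}/year={:04d}/month={:02d}/day={:02d}/{}".format(
        bed_id, year, month, day, filename)

key = hive_partition_key("icu-07", 2019, 6, 3, "waveform-120500.raw")
print(key)  # bed_id=icu-07/year=2019/month=06/day=03/waveform-120500.raw
```

Because each key=value segment is part of the object path, a query filtered on those parameters only reads the matching prefixes instead of scanning the whole bucket.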

Once the data was uploaded to Amazon S3 via Panacea in its proprietary raw hex format, it required conversion to a broadly supported file format. An Amazon EMR cluster was used to handle the parallel transform operations, with Zeppelin chosen as the interface: it exposes a web-accessible notebook to run code and display resulting visualisations, and each notebook consists of a sequence of cells, each of which can use a different interpreter. This provided easy access to Spark, SQL and Scala for data analytics.

IntelliHQ then unpacked the raw waveform and numerics data into a human-readable tabular file format using a combination of PySpark and SparkSQL code. They chose the widely supported CSV format and created annotation files that labelled normal vs abnormal waveform periods. The next data preparation steps took place in Amazon SageMaker, which gave access to serverless resources for training and deployment. After the addition of front-end graphics, the resulting visualisation displays in a simple UI. The code is then packaged with Amazon Elastic Container Service for deployment.
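The raw format itself is proprietary, but the decode step amounts to turning a hex string back into numeric samples. A minimal sketch, assuming 16-bit big-endian signed samples (the sample width and byte order are assumptions for illustration):

```python
import struct

# Minimal sketch: the real Panacea format is proprietary. We assume here that
# the hex string encodes 16-bit big-endian signed samples, which is an
# illustrative assumption only.
def decode_waveform(hex_string):
    """Convert a hex-encoded byte string into a list of integer ECG samples."""
    raw = bytes.fromhex(hex_string)
    count = len(raw) // 2
    return list(struct.unpack(">{}h".format(count), raw))

samples = decode_waveform("0010fff0002a")
print(samples)  # [16, -16, 42]
```

In the actual pipeline this unpacking ran in parallel across the cluster, with the resulting rows written out as CSV alongside the annotation files.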

ECG Live Stream - IntelliHQ - AWS

 

The Benefits

The proof-of-concept process resulted in significant query time improvements and, for tools like Amazon Athena, substantial query cost savings. Furthermore, the folder structure is self-documenting and unambiguous, allowing a highly decoupled architecture that can handle any future growth in data volume.

The data is also non-public, and AWS Identity and Access Management (IAM) credentials establish who has access rights. Security tags and security groups enable IntelliHQ to allocate who has access to data at different stages of preparation. Because these cloud services come with baked-in security, IntelliHQ can ensure strong control over health data.
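For illustration, an IAM policy restricting a role to a single preparation stage of the bucket might look like the sketch below; the bucket and prefix names are invented for the example.

```python
import json

# Illustrative only: the bucket and prefix names are invented for the example.
# Grants read-only access to a single data-preparation stage of the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::ecg-datalake/stage=raw/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Scoping the resource ARN to one stage prefix is what lets different teams see only the preparation stage they are responsible for.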

 

The Outcome

The web-based ECG live stream proof of concept showed that efficient, secure web services enable IntelliHQ to build effective, scalable cloud solutions. By embracing AI and machine learning, IntelliHQ will continue to advance the possibilities of commercialised healthcare innovation.

 

Download white paper here

What’s the difference between Artificial Intelligence (AI) & Machine Learning (ML)?

The field of Artificial Intelligence encompasses all efforts at imbuing computational devices with capabilities that have traditionally been viewed as requiring human-level intelligence. 

This includes:

  • Chess, Go and generalised game playing
  • Planning and goal-directed behaviour in dynamic and complex environments 
  • Theorem proving, proof assistants and symbolic reasoning 
  • Computer vision  
  • Natural language understanding and translation 
  • Deductive, inductive and abductive reasoning 
  • Learning from experience and existing data 
  • Understanding and emulating emotion 
  • Fuzzy and probabilistic (Bayesian) reasoning
  • Communication, teamwork, negotiation and argumentation between self-interested agents 
  • Early advances in signal processing (text to speech) 
  • Music understanding and creation 

Like intelligence itself, it defies definition.

As a field, it predates Machine Learning, which was seen as an early sub-field. Many things that now seem obvious, or are no longer considered AI, have their roots in the field. Many database models (hierarchical, network and relational) have their roots in AI research. Optimisation and scheduling were early problems tackled under the umbrella of AI. Minsky’s Frame model reads like an early description of object-oriented programming. LISP, Prolog and many other programming languages and programming language properties emerged as tools for, or as a result of, AI research.

Neural networks (a sub-field of machine learning) first emerged in the late 1950s in the form of perceptrons, which were heavily studied until Minsky and Papert demonstrated in 1969 that a single perceptron is unable to compute XOR. However, with the advent of error back-propagation over networks of perceptrons in the 1980s (a way to systematically train the weights between neurons), multi-layer networks became practical, and it was later shown that suitably configured neural networks are computationally universal: if a function can be computed on a Turing machine, a correctly configured (recurrent) neural network can implement that same function.
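The XOR limitation, and how a single hidden layer removes it, can be shown directly with hand-picked weights; no training is involved, and the weights below are chosen by hand purely for the demonstration.

```python
# Hand-picked weights demonstrating that a two-layer network computes XOR,
# which no single perceptron can. No training involved; weights chosen by hand.
def step(x):
    return 1 if x > 0 else 0

def xor_network(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden unit 1: OR(x1, x2)
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: AND(x1, x2)
    return step(h_or - h_and - 0.5)  # output: OR and NOT AND == XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_network(a, b))
# 0 0 0
# 0 1 1
# 1 0 1
# 1 1 0
```

The hidden layer recodes the inputs into linearly separable features (OR and AND), which is exactly the representational step a lone perceptron lacks; back-propagation's contribution was learning such weights automatically.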

With the advent of deep learning in the 2010s, the popularity of machine learning has soared as great successes have been achieved using the approach. Due to limits on computational power, traditional neural networks were trained on meticulously human-engineered features of the datasets, not the raw datasets themselves. With progress in cloud computing, GPUs and distributed learning, it became possible to create much larger and deeper neural networks, to the point that large raw datasets could be used directly for training and prediction. In doing so, the neural networks extract their own features from the data as part of the process. Many of the recent advances have been achieved due to this (in addition to better neuron activation functions, faster training algorithms and new network architectures). The successes have also inspired people to use deep learning to tackle some of the other problems of general AI (as discussed above), which may explain why many perceive a convergence, or confusion, between AI and machine learning.


AWS DeepLens: Creating an IoT Rule (Part 2 of 2)

This post is the second in a series on getting started with the AWS DeepLens. In Part 1, we introduced a program that could detect faces and crop them by extending the boilerplate Greengrass Lambda and pre-built model provided by AWS. That post focused on the local capabilities of the device, but the DeepLens is much more than that. At its core, DeepLens is a fully fledged IoT device, covering just one of the 3 pillars of IoT: devices, cloud and intelligence.

All code and templates mentioned can be found here. This can be deployed using AWS SAM, which helps reduce the complexity of creating event-based AWS Lambda functions.

Sending faces to IoT Message Broker

AWS DeepLens Device Console Page

When registering a DeepLens device, AWS creates all the things associated with the IoT cloud pillar. If you have a look for yourself in the IoT Core AWS console page, you will see the existing IoT groups, devices, certificates, etc. This all simplifies the process of interacting with the middle-man: the MQTT topic that is displayed on the main DeepLens device console page. The DeepLens (and others, if given authorisation) has the right to publish messages to the IoT topic within certain limits.

Previously, the AWS Lambda function responsible for detecting faces only showed them on the output streams, and published only the probabilities of faces above the detection threshold to the MQTT topic. We can modify this to include cropped face images as part of the packets that are sent to the topic.

The Greengrass function below extends the original version by publishing a message for each detected face. Encoded cropped face images are set in the “image_string” key of the object. IoT messages have a size limit of 128 KB, but the images will be well within the limit and encoded in Base64.


# File "src/greengrassHelloWorld.py" in code repository
from threading import Thread, Event
import os
import json
import base64
import numpy as np
import awscam
import cv2
import greengrasssdk

class LocalDisplay(Thread):
    def __init__(self, resolution):
    ...
    def run(self):
    ...
    def set_frame_data(self, frame):
    ....

    def set_frame_data_padded(self, frame):
        """
        Set the stream frame and return the rendered cropped face
        """
        ....
        return outputImage

def greengrass_infinite_infer_run():
    ...
    # Create a local display instance that will dump the image bytes
    # to a FIFO file that the image can be rendered locally.
    local_display = LocalDisplay('480p')
    local_display.start()
    # The sample projects come with optimized artifacts,
    # hence only the artifact path is required.
    model_path = '/opt/awscam/artifacts/mxnet_deploy_ssd_FP16_FUSED.xml'
    # Load the model once, before entering the frame loop
    model = awscam.Model(model_path, {'GPU': 1})
    ...
    while True:
        # Get a frame from the video stream
        ret, frame = awscam.getLastFrame()
        # Resize frame to the same size as the training set.
        frame_resize = cv2.resize(frame, (input_height, input_width))
        ...
        # Process the frame
        ...
        # Set the next frame in the local display stream.
        local_display.set_frame_data(frame)

        # Get the detected faces and probabilities
        for obj in parsed_inference_results[model_type]:
            if obj['prob'] > detection_threshold:
                # Add bounding boxes to full resolution frame
                xmin = int(xscale * obj['xmin']) \
                       + int((obj['xmin'] - input_width / 2) + input_width / 2)
                ymin = int(yscale * obj['ymin'])
                xmax = int(xscale * obj['xmax']) \
                       + int((obj['xmax'] - input_width / 2) + input_width / 2)
                ymax = int(yscale * obj['ymax'])

                # Add face detection to iot topic payload
                cloud_output[output_map[obj['label']]] = obj['prob']

                # Zoom in on Face
                crop_img = frame[ymin - 45:ymax + 45, xmin - 30:xmax + 30]
                output_image = local_display.set_frame_data_padded(crop_img)

                # Encode cropped face image and add to IoT message
                frame_string_raw = cv2.imencode('.jpg', output_image)[1]
                frame_string = base64.b64encode(frame_string_raw)
                cloud_output['image_string'] = frame_string

                # Send results to the cloud
                client.publish(topic=iot_topic, payload=json.dumps(cloud_output))
        ...

greengrass_infinite_infer_run()

Save faces to S3 with an IoT Rule

The third IoT pillar, intelligence, interacts with the cloud pillar, using insights to perform actions on other AWS and/or external services. Our goal is to have all detected faces saved to an S3 bucket in the original JPEG format, before the Base64 encoding. To achieve this, we need to create an IoT rule that will launch an action to do so.

IoT Rules listen for incoming MQTT messages on a topic and, when a certain condition is met, launch an action. The messages from the queue are analysed and transformed using a provided SQL statement. We want to act on all messages, passing on the data captured by the DeepLens device and also injecting a “unix_time” property. The IoT Rule Engine allows us to construct statements that do just that, calling the timestamp function within the SQL statement to add it to the result, as seen below.


# MQTT message
{
    "image_string": "/9j/4AAQ...",
    "face": 0.94287109375
}

# SQL Statement 
SELECT *, timestamp() as unix_time FROM '$aws/things/deeplens_topic_name/infer'

# IoT Rule Action event
{
    "image_string": "/9j/4AAQ...",
    "unix_time": 1540710101060,
    "face": 0.94287109375
}

The action is an AWS Lambda function (seen below) that is given an S3 bucket name and an event. At a minimum, the event must contain two properties: “image_string”, the encoded image, and “unix_time”, which is used as the name of the file. The latter is not provided when the IoT message is published to the MQTT topic; instead, it is added by the IoT rule that calls the action.


# File "src/process_queue.py" in code repository
import os
import boto3
import base64

def handler(event, context):
    """
    Decode a Base64 encoded JPEG image and save to an S3 Bucket with an IoT Rule
    """
    # Convert image back to binary
    jpg_original = base64.b64decode(event['image_string'])

    # Save image to S3 with the timestamp as the name
    s3_client = boto3.client('s3')
    s3_client.put_object(
        Body=jpg_original,
        Bucket=os.environ["DETECTED_FACES_BUCKET"],
        Key='{}.jpg'.format(event['unix_time']),
    )
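To sanity-check the decode-and-name logic locally without touching S3, the handler's transformations can be exercised on a fake event; the bytes below are stand-ins, not a real JPEG.

```python
import base64

# Exercise the handler's transformations locally; the bytes are stand-ins,
# not a real JPEG, and no S3 call is made here.
fake_jpg = b"\xff\xd8\xff\xe0 fake jpeg bytes"
event = {
    "image_string": base64.b64encode(fake_jpg).decode("ascii"),
    "unix_time": 1540710101060,
}

# The same two steps the Lambda performs before calling put_object:
jpg_original = base64.b64decode(event["image_string"])
key = '{}.jpg'.format(event['unix_time'])

print(jpg_original == fake_jpg)  # True
print(key)                       # 1540710101060.jpg
```

Because the Base64 round trip is lossless, the object written to S3 is byte-identical to the JPEG the DeepLens encoded on the device.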

Deploying an IoT Rule with AWS SAM

AWS SAM makes it incredibly easy to deploy an IoT Rule, as it is a supported event type for Serverless function resources, a high-level wrapper for AWS Lambda. By providing only the DeepLens topic name as a parameter for the template below, a fully event-driven, least-privilege AWS architecture is deployed.


# File "template.yaml" in code repository
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'

Parameters:
  DeepLensTopic:
    Type: String
    Description: Topic path for DeepLens device "$aws/things/deeplens_..."

Resources:
  ProcessDeepLensQueue:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python2.7
      Timeout: 30
      MemorySize: 256
      Handler: process_queue.handler
      CodeUri: ./src
      Environment:
        Variables:
          DETECTED_FACES_BUCKET: !Ref DetectedFaces

      Policies:
        - S3CrudPolicy:
            BucketName: !Ref DetectedFaces

      Events:
        DeepLensRule:
          Type: IoTRule
          Properties:
            Sql: !Sub "SELECT *, timestamp() as unix_time FROM '${DeepLensTopic}'"

  DetectedFaces:
    Type: AWS::S3::Bucket