Department of Computing

Demonstration of machine learning function orchestrator (MLFO) via reference implementations

LYIT/ITU-T AI Challenge

Participation

Letterkenny Institute of Technology (LYIT) is glad to promote the ITU Artificial Intelligence/Machine Learning in 5G Challenge.

The competition is open and will close at the end of the year. Participation in this competition is free and open to all interested parties from ITU member states. The competition includes a regional competition, a global competition, and a finals conference (online). The topics of the competition include a network track, an enabling track, and a vertical industry track.

Further details about this competition can be found in the ITU AI/ML 5G Challenge: Participation Guidelines.

Registration: Open

Please, use the following link to register for this challenge:

Register

Overview

[ITU-T Y.3172] specifies the machine learning function orchestrator (MLFO) as an architecture component for the integration of AI/ML in future networks including 5G. This is further extended by [ITU-T ML5G-I-248], which presents detailed requirements and APIs for the MLFO. The main objective of the MLFO is to provide integration, orchestration, and management of pipeline nodes and their dependencies in an ML pipeline while reducing operational costs. To handle pipeline dependencies, the MLFO can infer relationships between different pipeline nodes, e.g., between training and inference nodes, to improve automated model deployment in the ML pipeline. The MLFO offers a unified architecture to facilitate the orchestration of end-to-end ML workflows covering data collection, pre-processing, training, model inference, model optimization, and model deployment. It can monitor and evaluate ML pipeline instances and optimize their performance by taking appropriate decisions, including model update, retraining, model redeployment, and model chaining. Further, the MLFO aims to facilitate easy integration with existing ML frameworks and components, including [ITU-T Y.3173], [ITU-T Y.3174], the serving framework [ML5G-I-227], the ML sandbox [ML5G-I-232-R2], and the ML marketplace [ITU-T Y.ML-IMT2020-MP].
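To make the orchestration role concrete, the short Python sketch below shows one way an MLFO-style component could represent pipeline nodes and their dependencies, derive a deployment order, and decide on retraining from monitored performance. It is a minimal illustration under our own assumptions: the class names, fields, and accuracy threshold are invented for this sketch and are not defined in the ITU-T specifications.

    # Minimal sketch (illustrative only): pipeline nodes, dependency-aware
    # deployment order, and a toy monitoring decision. All names and the
    # threshold below are assumptions of this sketch, not ITU-T definitions.
    from dataclasses import dataclass, field

    @dataclass
    class PipelineNode:
        name: str                                   # e.g. "training", "inference"
        depends_on: list = field(default_factory=list)

    @dataclass
    class MLPipeline:
        name: str
        nodes: dict                                 # node name -> PipelineNode

        def deployment_order(self):
            """Order nodes so that every dependency is deployed first."""
            ordered, seen = [], set()

            def visit(node_name):
                if node_name in seen:
                    return
                seen.add(node_name)
                for dep in self.nodes[node_name].depends_on:
                    visit(dep)
                ordered.append(node_name)

            for node_name in self.nodes:
                visit(node_name)
            return ordered

    def monitor(pipeline, reported_accuracy, threshold=0.9):
        """Toy decision: ask for retraining when monitored accuracy drops."""
        if reported_accuracy < threshold:
            return f"retrain and redeploy the model of pipeline '{pipeline.name}'"
        return "no action"

    nodes = {
        "data-collection": PipelineNode("data-collection"),
        "pre-processing": PipelineNode("pre-processing", ["data-collection"]),
        "training": PipelineNode("training", ["pre-processing"]),
        "inference": PipelineNode("inference", ["training"]),
    }
    pipeline = MLPipeline("traffic-prediction", nodes)
    print(pipeline.deployment_order())              # dependencies come first
    print(monitor(pipeline, reported_accuracy=0.85))

In this toy form the "decisions" are just strings; a real orchestrator would translate them into calls towards the training and serving components.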

One of the important objectives of the MLFO is to hide the underlying complexity of orchestrating ML pipeline nodes by providing an abstraction to users and developers through high-level APIs. Moreover, it aims to address the challenge of running multiple pipeline workflows in parallel while ensuring that these workflows do not interfere with each other.
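As an illustration of that abstraction, the sketch below assumes a hypothetical MLFOClient whose single submit_pipeline call hides the underlying deployment steps, and runs two pipeline workflows in parallel. The client class, its method, and the descriptor format are assumptions made for this sketch; they are not a published MLFO API.

    # Hypothetical high-level client: one call per pipeline, pipelines run in
    # parallel and independently of each other. Illustrative only.
    import concurrent.futures
    import time

    class MLFOClient:
        def submit_pipeline(self, descriptor):
            """Pretend to deploy one ML pipeline end to end and report status."""
            # A real orchestrator would resolve dependencies, request resources,
            # and deploy each pipeline node; here we only simulate the work.
            time.sleep(0.1)
            return {"pipeline": descriptor["name"], "status": "deployed"}

    def run_in_parallel(client, descriptors):
        """Run several pipeline workflows concurrently, one future per pipeline."""
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [pool.submit(client.submit_pipeline, d) for d in descriptors]
            return [f.result() for f in futures]

    client = MLFOClient()
    requests = [{"name": "traffic-prediction"}, {"name": "anomaly-detection"}]
    for result in run_in_parallel(client, requests):
        print(result)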

The MLFO architecture is expected to provide flexibility, reusability, and extensibility of the ML pipeline to accommodate the rapid pace of development in ML pipeline nodes, e.g., ML models. The MLFO can achieve this by splitting the ML pipeline in a way that allows pipeline nodes to be reused to offer specialized services.
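The reuse idea can be shown with two pipeline descriptors that share a single pre-processing node definition instead of duplicating it; the descriptor format below is an assumption made only for this illustration.

    # Two specialized pipelines reuse one shared pre-processing node definition
    # (descriptor format assumed for illustration only).
    shared_preprocessing = {"name": "pre-processing", "image": "preproc:latest"}

    traffic_prediction = {
        "name": "traffic-prediction",
        "nodes": [shared_preprocessing, {"name": "lstm-training"}, {"name": "inference"}],
    }
    anomaly_detection = {
        "name": "anomaly-detection",
        "nodes": [shared_preprocessing, {"name": "autoencoder-training"}, {"name": "inference"}],
    }

    # Both pipelines reference the same node object, so an update to the shared
    # pre-processing definition is visible to every pipeline that reuses it.
    assert traffic_prediction["nodes"][0] is anomaly_detection["nodes"][0]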

Problem statement

The goal of this challenge is to produce a reference implementation of the MLFO. The use cases, requirements, and reference points studied in detail in the references below make the MLFO an interesting target for a reference implementation.

Considering the progress in open-source service orchestration mechanisms, e.g., the ONAP SO project [ONAP SO] and ETSI Open Source MANO [ETSI OSM], open-source AI/ML marketplaces [ACUMOS], and simulation platforms [KOMONDOR], interesting reference implementations that demonstrate specific concepts from the ITU-T specifications are possible. This problem statement falls under the enabling track.

Specific Concepts:

[ITU-T ML5G-I-248] specifies the following scenarios for MLFO interaction with various other entities:

  • Handling the ML Intent from the operator: this provides a mechanism for the operator to input the details of ML use cases via the ML Intent, as specified in [ITU-T Y.3172].
  • Control of model management, e.g., selection, training, and deployment using the MLFO, in coordination with the Sandbox and Serving framework. NOTE – No dataset is required for the model management implementation; metadata should suffice.
  • Interaction with the ML Marketplace.
  • Handling of asynchronous trigger operations from different architecture components to the MLFO. A minimal, hypothetical sketch of an ML Intent and of trigger handling is given after this list.
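To keep the first and last scenarios concrete, the sketch below shows what an operator-supplied ML Intent might look like and how an asynchronous trigger from another component could be mapped to an orchestration decision. The field names, the trigger format, and the decision logic are hypothetical and chosen only for this sketch; the normative intent contents are defined in [ITU-T Y.3172] and [ITU-T ML5G-I-248].

    # Hypothetical ML Intent (field names assumed) and toy handling of an
    # asynchronous trigger from the serving framework. Illustrative only.
    import json
    import queue

    ml_intent = json.loads("""
    {
      "use_case": "traffic-prediction",
      "model_requirements": {"latency_ms": 50, "accuracy_target": 0.95},
      "data_sources": ["core-network-kpis"],
      "deployment_target": "serving-framework",
      "marketplace_lookup": true
    }
    """)

    # Asynchronous triggers from other components (sandbox, serving framework,
    # marketplace) are modelled here as messages on an in-process queue.
    triggers = queue.Queue()
    triggers.put({"source": "serving-framework", "event": "model-degraded"})

    def handle_trigger(event):
        """Map an incoming trigger to an orchestration decision (toy logic)."""
        if event["event"] == "model-degraded":
            return f"re-select or retrain the model for use case '{ml_intent['use_case']}'"
        return "no action"

    while not triggers.empty():
        print(handle_trigger(triggers.get()))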

Evaluation criteria

The competition schedule is divided into two stages, Phase I and Phase II, each of which requires a different submission.

Phase I:

Evaluation standard (full marks: 40)

Selection of concept demo (10 marks)

  1. Clarity of demo statement
  2. Traceability to ITU-T specifications
  3. Proof-of-concept demo plan

Design methodology (15 marks)

  1. Clarity of demo goals
  2. Use case diagram/flow chart
  3. Architecture diagram
  4. Open-source components used

Test setup and timeline (15 marks)

  1. Details of the test setup
  2. Traceability to requirements and design

Total: 40 marks

 

Phase II:

Evaluation standard (full marks: 60)

Report + PPT (20 marks)

Detailed report including: i) demo problem statement, ii) motivation, iii) challenges, iv) milestones achieved, v) methodology (system design, flow chart), vi) results and discussion, vii) conclusion.

Demo completion (40 marks)

Demonstrable solution: a proof of concept (PoC) that maps to the MLFO specification is a must.

Points to consider: flexibility for possible extensions, potential adaptations and integrations, and completeness of the scenario.

Total: 60 marks

Final submissions and winners

Based on the evaluation, participating teams will be ranked, and the top 3 solutions of this challenge will advance to the global round of the ITU AI/ML in 5G Challenge. The global round will nominate the three best solutions overall. The top 3 teams will receive certificates of appreciation and recognition on the website.

Submission information (coming soon)

All participants in the MLFO reference implementation challenge are required to register on the ITU website and also to enrol their team using the following email address (COMING SOON). We will confirm team enrolment within 3-4 days. In the email, state the team name, the name of each participant (recall that each one must have registered individually on the ITU website mentioned above), and a contact email address.

Timeline

  • Registration open to participants: 31st July 2020
  • Global Round duration: July - November 2020
  • Phase I submission: 20th September 2020
  • Phase II submission: 20th October 2020
  • Evaluation: October 30th – November 15th 2020
  • Winners (top 3) official announcement: November 30th 2020
  • Awards and presentation: December 2020

NOTE – These dates are tentative and may change slightly over the course of the challenge.

Organisers

Thomas Dowling (Head of Computing Department, LYIT)


Thomas.Dowling@lyit.ie

Shagufta Henna (Lecturer, LYIT)


Shagufta.Henna@lyit.ie

Eoghan Furey (Lecturer, LYIT)


Eoghan.Furey@lyit.ie

Kevin Meehan (Lecturer, LYIT)


Kevin.Meehan@lyit.ie

Karen Doherty (Staff)

Karen.Doherty@lyit.ie

 

Webmasters and support


ITU invites you to participate in the ITU Artificial Intelligence/Machine Learning in 5G Challenge, a competition scheduled to run from now until the end of the year. Participation in the Challenge is free of charge and open to all interested parties from countries that are members of ITU.

Detailed information about it can be found on the Challenge website, which includes the document "ITU AI/ML 5G Challenge: Participation Guidelines".

Slack

Please click here to access the Slack channel.

References

[ITU-T Y.3172] ITU-T Recommendation "Architectural framework for machine learning in future networks including IMT-2020"
[ITU-T ML5G-I-248] ITU-T FG ML5G draft specification "Requirements, architecture, and design for machine learning function orchestrator"
[ONAP SO] ONAP Service Orchestrator https://wiki.onap.org/download/attachments/22249706/ONAP%20SO.pdf
[ETSI OSM] Open Source MANO: An ETSI-hosted project to develop an Open Source NFV Management and Orchestration (MANO)
[ACUMOS] Acumos AI: a platform and open-source framework to build, share, and deploy AI apps. https://www.acumos.org/
[KOMONDOR] S. Barrachina-Muñoz et al., "Komondor: A Wireless Network Simulator for Next-Generation High-Density WLANs," in 2019 Wireless Days (WD), IEEE, 2019, pp. 1–8.
[ML5G-I-227] ITU-T FG ML5G ML5G-I-227 "Serving framework for ML models in future networks including IMT-2020"
[ML5G-I-232-R2] ITU-T FG ML5G ML5G-I-232 "Machine Learning Sandbox for future networks including IMT-2020: requirements and architecture framework"