Overview
[ITU-T Y.3172] specifies the machine learning function orchestrator (MLFO) as an architecture component for the integration of AI/ML in future networks, including 5G. This is further extended by [ITU-T ML5G-I-248], which presents detailed requirements and APIs for the MLFO. The main objective of the MLFO is to provide integration, orchestration, and management of pipeline nodes and their dependencies in an ML pipeline while reducing operational costs. To handle pipeline dependencies, the MLFO can infer relationships between different pipeline nodes, e.g., between training and inference nodes, and use them to automate model deployment in the ML pipeline. The MLFO offers a unified architecture to facilitate the orchestration of end-to-end ML workflows, including data collection, pre-processing, training, model inference, model optimization, and model deployment. It can monitor and evaluate ML pipeline instances and optimize their performance by taking appropriate decisions, including model update, retraining, model redeployment, and model chaining. Further, the MLFO aims to facilitate easy integration with existing ML frameworks, including [ITU-T Y.3173], [ITU-T Y.3174], the serving engine [ML5G-I-227], the ML sandbox [ML5G-I-234], and the ML marketplace [ITU-T Y.ML-IMT2020-MP].
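To make the dependency-handling idea concrete, the following minimal Python sketch shows how an orchestrator might model pipeline nodes and derive a deployment order from their declared dependencies, e.g., deploying a training node before the inference node that consumes its model. All class, function, and node names here are illustrative assumptions, not part of the ITU-T specifications.

# Hypothetical sketch of how an MLFO might model pipeline nodes and their
# dependencies; names are illustrative, not taken from the specifications.
from dataclasses import dataclass, field

@dataclass
class PipelineNode:
    name: str                                  # e.g., "collect", "train", "infer"
    depends_on: list = field(default_factory=list)

class Orchestrator:
    def __init__(self):
        self.nodes = {}

    def register(self, node: PipelineNode):
        self.nodes[node.name] = node

    def deployment_order(self):
        # Topologically sort nodes so that, e.g., a training node is
        # deployed before the inference node that depends on it.
        order, visited = [], set()

        def visit(name):
            if name in visited:
                return
            visited.add(name)
            for dep in self.nodes[name].depends_on:
                visit(dep)
            order.append(name)

        for name in self.nodes:
            visit(name)
        return order

mlfo = Orchestrator()
mlfo.register(PipelineNode("collect"))
mlfo.register(PipelineNode("train", depends_on=["collect"]))
mlfo.register(PipelineNode("infer", depends_on=["train"]))
print(mlfo.deployment_order())  # ['collect', 'train', 'infer']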
One of the important objectives of the MLFO is to hide the underlying complexities of orchestrating ML pipeline nodes by providing an abstraction to users and developers through high-level APIs. Moreover, it aims to address the challenge of running multiple pipeline workflows in parallel while ensuring that these workflows do not interfere with each other.
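As a rough illustration of such an abstraction, the sketch below assumes a hypothetical high-level entry point, run_pipeline, behind which node-level orchestration details are hidden, and runs two pipeline workflows in parallel workers so that one does not block the other. Real isolation mechanisms (namespaces, resource quotas) are out of scope for this fragment.

# Illustrative only: a hypothetical high-level API hiding node-level details.
import concurrent.futures

def run_pipeline(pipeline_id: str) -> str:
    # Placeholder for instantiating and running the nodes of one ML pipeline.
    return f"{pipeline_id}: deployed"

# Each workflow runs in its own worker, so parallel pipelines proceed
# independently; this stands in for the MLFO's parallel workflow handling.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    for result in pool.map(run_pipeline, ["qos-forecast", "anomaly-detection"]):
        print(result)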
The MLFO architecture is expected to provide flexibility, reusability, and extensibility of the ML pipeline to accommodate the rapid pace of development in ML pipeline nodes, e.g., ML models. The MLFO can achieve this by splitting the ML pipeline, with the flexibility to reuse pipeline nodes in order to offer specialized services.
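The following fragment sketches the reuse idea under the same assumptions as above: a single, hypothetical preprocessing node specification shared by two pipelines that differ only in their model stage. The node fields shown are placeholders for illustration.

# Illustrative sketch of pipeline-node reuse; all names are hypothetical.
preprocess = {"name": "preprocess", "image": "example/preproc:1.0"}  # shared node

forecast_pipeline = [{"name": "collect"}, preprocess, {"name": "train-forecast"}]
anomaly_pipeline = [{"name": "collect"}, preprocess, {"name": "train-anomaly"}]

for pipeline in (forecast_pipeline, anomaly_pipeline):
    print(" -> ".join(node["name"] for node in pipeline))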
Problem statement
The goal of this challenge is to support a reference implementation of the MLFO. Based on the detailed study of multiple use cases, requirements, and reference points described in the references, a reference implementation of the MLFO presents an interesting challenge.
Considering the progress in open source service orchestration mechanisms, e.g., the ONAP SO project [ONAP SO] and ETSI MANO [ETSI OSM], open source AI/ML marketplaces [ACUMOS], and simulation platforms [KOMONDOR], interesting reference implementations that can prove specific concepts mentioned in the ITU-T specifications are possible. This problem statement is covered under the enabling track.
Specific concepts
[ITU-T ML5G-I-248] specifies the following scenarios for the MLFO's interaction with various other entities:
- Handling ML intent from the operator: this provides a mechanism for the operator to input the details of the ML use cases via the ML intent, as specified in [ITU-T Y.3172] (see the sketch after this list).
- Control of model management, e.g., selection, training, and deployment using the MLFO, in coordination with the sandbox and the serving framework.
NOTE - No dataset is required for the model management implementation; metadata alone should suffice.
- Interaction with ML Marketplace.
- Handling of asynchronous trigger operations from different architecture components to the MLFO.
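To illustrate the first scenario, the fragment below shows a minimal, hypothetical ML intent as a Python dictionary. The actual intent format and fields are defined in [ITU-T Y.3172]; the keys used here (use_case, source, sink, model_requirements) are assumptions for illustration only.

# Hypothetical ML intent declared by an operator; field names are assumptions.
import json

ml_intent = {
    "use_case": "traffic-prediction",               # operator's ML use case
    "source": "core-network-counters",              # where input data is collected
    "sink": "policy-controller",                    # where model output is applied
    "model_requirements": {"max_latency_ms": 100},  # illustrative constraint
}

# An MLFO would parse such a declaration and derive node selection,
# placement, and chaining decisions from it.
print(json.dumps(ml_intent, indent=2))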