JFrog Ltd., the company behind the JFrog Software Supply Chain Platform, announced a new integration with Amazon SageMaker. The partnership lets businesses use fully managed infrastructure, tools, and workflows to build, train, and deploy machine learning (ML) models for any use case. By combining JFrog Artifactory with Amazon SageMaker, ML models can be delivered alongside all other software components in a modern DevSecOps workflow, ensuring each model is secure, vetted, traceable, and immutable as it moves toward release. JFrog also announced new versioning capabilities for its ML Model Management solution, which help ensure that security and compliance are built into every stage of ML model development.
Kelly Hartman, SVP of Global Channels and Alliances at JFrog, stated, “DevOps team leaders are asking how they can scale data science and ML capabilities to accelerate software delivery without introducing risk and complexity as more companies begin managing big data in the cloud.” To instill DevSecOps best practices in cloud-based ML model development, Artifactory and Amazon SageMaker work together as a single source of truth that offers flexibility, speed, security, and peace of mind, opening up new possibilities for MLSecOps.
In a recent Forrester survey, 45% of data decision-makers cited data and model security, and 50% cited establishing governance standards for AI/ML, as the largest obstacles to wider adoption. By applying DevSecOps best practices to ML model management, JFrog’s Amazon SageMaker integration enables developers and data scientists to scale, accelerate, and secure the development of ML projects in a way that is enterprise-grade and compliant with organizational and regulatory requirements.
With JFrog’s new Amazon SageMaker integration, businesses can:
Provide data scientists and developers with a single source of truth and make sure all models are easily available, traceable, and impervious to tampering.
Bring machine learning closer to the workflows of software development and production while safeguarding models from erasure or alteration.
Build, train, and deploy machine learning models.
Identify and prevent the use of harmful machine learning models within the company.
Verify ML model licensing compliance with company guidelines and legal requirements by scanning them.
For increased transparency, store in-house developed or internally enhanced machine learning models with strong access controls and full versioning history (a minimal upload sketch follows this list).
Bundle and distribute machine learning models with every software release.
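To make the storage-and-versioning capability above more concrete, here is a minimal, hypothetical sketch of pushing a trained model archive to an Artifactory repository over its standard REST deploy API. The host name, repository name, model path, and token are placeholders for illustration only and are not part of JFrog’s announcement.

```python
# Minimal sketch (placeholder names, not official JFrog tooling):
# upload a trained model archive to a generic Artifactory repository.
import os
import requests

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"   # placeholder host
REPO = "ml-models-local"                                      # placeholder repository
MODEL_PATH = "fraud-detector/1.4.0/model.tar.gz"              # version encoded in the path
TOKEN = os.environ["ARTIFACTORY_TOKEN"]                       # access token supplied via env var

with open("model.tar.gz", "rb") as f:
    resp = requests.put(
        f"{ARTIFACTORY_URL}/{REPO}/{MODEL_PATH}",
        data=f,
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
resp.raise_for_status()
print("Stored model version at:", resp.json().get("downloadUri"))
```

Encoding the version in the artifact path keeps every model release traceable and immutable, mirroring how other build artifacts are typically handled in Artifactory.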
According to Larry Carvalho, Principal and Founder of RobustCloud, “traditional software development processes and machine learning stand apart, lacking integration with existing tools. JFrog Artifactory and Amazon SageMaker work together to offer a unified, regulated machine learning environment from beginning to end. Combining these domains signifies a major step forward in aligning machine learning workflows with accepted software development lifecycles and industry best practices.”
In addition to the Amazon SageMaker integration, JFrog announced new versioning capabilities for its ML Model Management solution. These capabilities integrate model development into an organization’s DevSecOps workflow and increase transparency around each model version, so data scientists, DevOps teams, and developers can ensure the correct, secure version of a model is used.
The JFrog integration with Amazon SageMaker is available now to customers of both companies, ensuring that any artifacts data scientists use, or that go into building machine learning applications, are retrieved from and stored in JFrog Artifactory.
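As a rough illustration of that retrieval pattern (not official JFrog or AWS sample code), a SageMaker training or processing script could pull a pinned, vetted model version from Artifactory before doing any work. The repository, path, and environment variable below are hypothetical.

```python
# Minimal sketch (placeholder names): fetch a pinned, vetted model version
# from Artifactory inside a SageMaker training/processing script, so the job
# consumes only artifacts that passed the DevSecOps pipeline.
import os
import requests

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"   # placeholder host
REPO = "ml-models-local"                                      # placeholder repository
MODEL_PATH = "fraud-detector/1.4.0/model.tar.gz"              # pinned, vetted version
TOKEN = os.environ["ARTIFACTORY_TOKEN"]

resp = requests.get(
    f"{ARTIFACTORY_URL}/{REPO}/{MODEL_PATH}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    stream=True,
)
resp.raise_for_status()

# SageMaker containers conventionally read and write models under /opt/ml/model.
dest = "/opt/ml/model/model.tar.gz"
os.makedirs(os.path.dirname(dest), exist_ok=True)
with open(dest, "wb") as f:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        f.write(chunk)
```

Pinning an exact version in the download path, rather than pulling “latest,” is what keeps the job reproducible and auditable across environments.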