This post is a great starting point if you’re new to Azure ML. It will give you a succinct high-level overview of the concepts that make up Azure ML, and provide structure for the technical articles that will follow in this series.
In this post, you’ll learn how to train a simple machine learning model in the cloud, and how to deploy it using a managed endpoint. I assume familiarity with machine learning, but no knowledge of Azure or Azure ML.
In this post, I’ll discuss how to break up your training code into Azure ML components, and how to connect those components into an Azure ML pipeline. I assume that you read my blog post on how to do basic training in Azure ML, or that you have equivalent experience.
This post demonstrates how to mix the three main methods for creating Azure ML resources (CLI, SDK and Studio) in a single project. I recommend that you already have some familiarity with these different methods, either from your own experience or by reading my introductory and basic training posts.
In this post, you’ll learn how to create endpoints on Azure that enable real-time predictions using your custom model. I assume familiarity with machine learning concepts, such as training and prediction, but no knowledge of Azure.
In this post, you’ll learn how to create endpoints on Azure that enable asynchronous batch predictions using your custom model. I assume familiarity with machine learning concepts, such as training and prediction, but no knowledge of Azure.
When creating Azure ML resources, you often have to decide which compute to use to run your code. This post will help you make the right choice for your scenario. You'll need some familiarity with Azure ML to follow along.
In this post, you’ll learn how to run your Azure ML projects on GitHub Codespaces, a cloud-based development environment that runs your code in a container. I’ll also cover several best practices for configuring VS Code settings for machine learning projects. I assume some familiarity with Git, GitHub, VS Code, Python, Conda, and Azure ML.
In this post, you’ll learn how to configure your terminal on GitHub Codespaces, ensuring that your remote environment feels as familiar as your local one. The content of this post is useful to anyone using VS Code and GitHub Codespaces, whether you’re working on machine learning projects or not.
This post introduces PyTorch concepts through the creation of a basic neural network using the Fashion MNIST dataset as a data source. I assume that you have a basic conceptual understanding of neural networks, and that you’re comfortable with Python, but I assume no knowledge of PyTorch.
This post provides all the concepts and practical knowledge you need to get started with TensorFlow. We’ll explore Keras, a high-level API released as part of TensorFlow, and we’ll use it to build a basic neural network using the Fashion MNIST dataset as a data source. I assume that you have a basic conceptual understanding of neural networks, and that you’re comfortable with Python, but I assume no knowledge of TensorFlow or Keras.
In this blog post, we’ll re-implement parts of the code from my earlier Keras post, but this time we’ll use lower-level TensorFlow concepts. I assume that you completed my tutorial on Keras or that you have a solid knowledge of Keras, but I assume no knowledge of TensorFlow.
How do PyTorch code and TensorFlow code compare? Maybe you’re in the beginning phases of your machine learning journey and deciding which framework to embrace, or maybe you’re an experienced ML practitioner considering a change of framework. Either way, you’re in the right place. Drawing from my previous posts, I’ll compare the PyTorch and TensorFlow versions of the code used to classify images in the Fashion MNIST dataset.
In this post, I’ll explain how to convert time-series signals into spectrograms and scaleograms. In a future post, we’ll use the images created here to classify the signals. I assume that you have basic math skills and are familiar with basic machine learning concepts.
In this post, I’ll discuss the 2016 paper “Discovering Governing Equations from Data by Sparse Identification of Nonlinear Dynamical Systems” by Brunton et al. I’ll explain the main concepts of the paper in an accessible way, and I’ll show how we can use its novel approach to discover the Lorenz system of equations from data. I assume basic familiarity with ordinary differential equations and dynamical systems.
In this post, I will use the PySINDy Python package to discover a system of ordinary differential equations that best represents my experimental data. I assume that you read my post “Discovering equations from data using SINDy,” and that you have basic familiarity with ordinary differential equations and dynamical systems.