A time series is a set of data observed sequentially over time. Time series prediction takes observations from previous time steps as input and predicts values at future times. Classic time-series forecasting methods usually make predictions by extrapolating from previous data using statistical models. Such methods often involve making assumptions about the underlying distribution and then decomposing the time series into components such as trend, seasonality, and noise. Newer machine learning methods make fewer and weaker assumptions about the data. In particular, neural network models often view time series prediction as a sequence-modeling problem, and they have been applied successfully to time series forecasting.
However, building machine learning applications for time series prediction can be a laborious and knowledge-intensive process. To provide an easy-to-use time-series prediction toolkit, we applied automated machine learning (AutoML) to time series prediction. In particular, we automated the process of feature generation, model selection, and hyper-parameter tuning. The toolkit is built on top of Ray*, a distributed framework for advanced AI applications open-sourced by UC Berkeley RISELab, and is provided as part of Analytics Zoo, a unified data analytics and AI platform open-sourced by Intel.
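To make the hyper-parameter tuning step concrete, here is a minimal, self-contained sketch of the idea: random search over a forecaster's hyper-parameter (here, the lag window of a simple moving-average model), selecting the configuration with the lowest validation error. This is purely illustrative and is not the Analytics Zoo or Ray Tune API; all function names (`make_series`, `auto_tune`, etc.) are hypothetical.

```python
import random

def make_series(n=200, seed=0):
    # Hypothetical synthetic series: linear trend + period-4 seasonal
    # pattern + Gaussian noise (illustration only).
    rng = random.Random(seed)
    return [0.05 * t + [0, 1, 2, 1][t % 4] + rng.gauss(0, 0.1)
            for t in range(n)]

def forecast(history, window):
    # Moving-average forecaster: predict the next value as the mean
    # of the last `window` observations.
    return sum(history[-window:]) / window

def validation_mse(series, window, holdout=50):
    # Walk-forward validation over the last `holdout` points:
    # at each step, forecast one step ahead and record squared error.
    errors = []
    for t in range(len(series) - holdout, len(series)):
        pred = forecast(series[:t], window)
        errors.append((pred - series[t]) ** 2)
    return sum(errors) / len(errors)

def auto_tune(series, candidate_windows, trials=8, seed=1):
    # Random search: sample hyper-parameter values and keep the one
    # with the lowest validation error.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        w = rng.choice(candidate_windows)
        mse = validation_mse(series, w)
        if best is None or mse < best[1]:
            best = (w, mse)
    return best

series = make_series()
best_window, best_mse = auto_tune(series, candidate_windows=[1, 2, 4, 8, 16])
print("best window:", best_window, "validation MSE:", round(best_mse, 4))
```

A real AutoML framework generalizes this loop: it searches jointly over features, model families, and their hyper-parameters, and distributes the trials across a cluster (which is where Ray Tune comes in).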
Read this RISELab blog to learn how Intel implemented the scalable AutoML framework and automatic time-series prediction leveraging Ray Tune and RayOnSpark.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.