Microsoft is previewing a new Azure service designed to speed the development of bots built with the Microsoft Bot Framework.
Microsoft announced the Azure Bot Service preview on November 15. The service is built on Azure Functions, Microsoft's serverless compute service and its answer to Amazon's AWS Lambda, which allows bots to scale on demand. Azure Functions itself became generally available the same day.
The Azure Bot Service will enable developers to “build, connect, deploy and manage intelligent bots that interact naturally wherever your users are talking - from your app or website to text/sms, Slack, Facebook Messenger, Skype, Teams, Kik, Office 365 mail and other popular services,” said officials in a blog post.
The service will allow developers to build intelligent bots using Microsoft's Cognitive Services, which plug directly into a developer experience built specifically for bot development. That environment includes out-of-the-box templates that let developers write bots in C# or Node.js directly in the browser, or use an IDE or code editor of their choice.
Microsoft took the wraps off its Bot Framework at the Build conference earlier this year. The Framework consists of three pieces: the Bot Builder software development kit (hosted on GitHub) for building bots in C# or Node.js; the Bot Connector for registering, publishing, and managing bots and connecting them to text/SMS, Office 365 mail, Skype, Slack, Telegram, Kik, and more; and a Bot Directory of bots developed with the Bot Framework.
Developers can try out the Azure Bot Service preview starting today.
Microsoft also announced today that OpenAI has chosen Azure as its “preferred cloud platform.” OpenAI is a nonprofit research organization founded by Elon Musk, Sam Altman, Greg Brockman, and Ilya Sutskever to further advances in artificial intelligence.
In other AI-related news, Microsoft officials also said that the Azure N-Series virtual machines, which are designed for the most compute-intensive workloads, including deep learning, simulations, rendering, and neural network training, will be generally available starting in December 2016.