Anna Shalomova
PPM Consultant
Artificial Intelligence in Business Analytics is one of the most important trends today. In its 2019 Strategic Technology Trends report, Gartner named the combination of AI with autonomous things as a key direction. Power BI, given its level of sophistication, could not stay away from the idea of AI interacting with its environment, so Microsoft has incorporated AI into Power BI's capabilities. FluentPro's Power BI consulting services team keeps its hand on the pulse as well and is ready to assist you and your business analytics department in learning and using all the new improvements.
Explaining the new approach, Gartner experts stated that self-service analytical systems will no longer be a differentiator in themselves. Modern self-service BI does not require substantial participation from IT specialists, complicated data loading and modeling procedures, a dedicated semantic layer, or developer input to build data structures. So far, however, self-service has mostly meant analyzing traditional structured data. Now vendors are increasingly trying to extend the "self-service" metaphor to a wider range of data, including Big Data.
The Role of Data Scientists and AI in Big Data Utilization
For the effective use of big data, along with the IT specialists and developers mentioned above, it is also necessary to involve data scientists. Sadly, there are still very few of them available, and they are costly specialists. However, several tools on the market facilitate their work: Presto, Impala, and Spark SQL handle SQL processing over distributed Hadoop data, while platforms such as AtScale connect the BI system to the Hadoop cluster. Still, these tools cannot fully satisfy end users in terms of self-service.
In this regard, vendors are increasingly paying attention to artificial intelligence (AI) technology. Microsoft has integrated its Cortana voice assistant, with AI elements, into the server side of Power BI, so end business users can formulate queries to the analytical system in natural language, by voice. (That said, we want to remind you that a year ago, one of the user-oriented trends was operating Power BI via mobile apps and even Apple Watches.)
Cognitive Services and AI in Power BI
Using Cognitive Services in Power BI is possible by applying various Azure algorithms when you prepare data for dataflows yourself. Users can access services for sentiment analysis, key phrase extraction, language detection, and image tagging. The transformations are performed in the Power BI service and do not require a separate subscription to Azure Cognitive Services. This feature is available in Power BI Premium.
Cognitive Services support is provided by premium capacity nodes such as EM2, A2, or P1. The capacity uses a separate artificial intelligence workload to run Cognitive Services. During the public preview, this workload is disabled by default, so before using Cognitive Services in Power BI, you must enable the artificial intelligence workload in the capacity settings in the administration portal. You can enable it in the workloads section, where you must also specify its maximum memory amount. It is recommended to allocate no more than 20% of the capacity's memory to it; exceeding this volume will slow down the processing of requests.
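As a back-of-the-envelope sketch of the 20% guidance above, the snippet below computes the recommended memory cap for the AI workload per node. The node memory sizes used here are illustrative assumptions, not official figures; check your capacity SKU's documentation for actual values.

```python
# Sketch: cap the AI workload at no more than 20% of a node's memory.
# Node memory figures below are illustrative assumptions.
def max_ai_workload_gb(node_memory_gb, cap=0.20):
    """Return the recommended maximum memory (GB) for the AI workload."""
    return node_memory_gb * cap

for node, mem in [("EM2/A2 (assumed 5 GB)", 5), ("P1 (assumed 25 GB)", 25)]:
    print(f"{node}: allocate at most {max_ai_workload_gb(mem):.1f} GB")
```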
Available Cognitive Services and AI features
Language detection:
The language detection function evaluates the input text and returns the language name and its ISO code for each field. This function is helpful for columns of arbitrary text whose language is unknown. The Text Analytics API recognizes up to 120 languages.
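To make the "language name plus ISO code per field" output concrete, here is a small Python sketch that reshapes a detection result of that form into a per-document lookup. The response structure used here is illustrative, not an exact API contract.

```python
# Sketch: shaping language-detection results (language name + ISO 639-1
# code per document). The response layout below is an assumption made
# for illustration, not the exact Text Analytics API schema.

def extract_languages(response):
    """Map each document id to its detected (language name, ISO code)."""
    return {
        doc["id"]: (doc["detectedLanguage"]["name"],
                    doc["detectedLanguage"]["iso6391Name"])
        for doc in response["documents"]
    }

sample_response = {
    "documents": [
        {"id": "1", "detectedLanguage": {"name": "English", "iso6391Name": "en"}},
        {"id": "2", "detectedLanguage": {"name": "Spanish", "iso6391Name": "es"}},
    ]
}

print(extract_languages(sample_response))
```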
Extract key phrases:
Extract key phrases evaluates unstructured text and returns a list of key phrases for each text field. The function requires a text field as input and accepts an optional "Cultureinfo" parameter. Key phrase extraction works best with large pieces of text, whereas sentiment analysis is more efficient on small pieces of text.
Sentiment scoring:
The Score Sentiment function evaluates the input text and returns a sentiment score for each document, from 0 (negative) to 1 (positive). This feature helps determine positive and negative opinions on social networks, customer reviews, and forums. The Text Analytics API uses a machine-learning classification algorithm to estimate sentiment on the 0-to-1 scale: a score close to 1 indicates positive sentiment, and a score close to 0 indicates the opposite. The model is pre-trained on a large body of text with sentiment labels; currently, the model cannot be trained on your own data. During the analysis, the model uses several techniques: text processing, part-of-speech analysis, word order, and word associations.
Sentiment analysis is performed on the input text as a whole rather than on a specific entity mentioned in the text. As practice confirms, accuracy increases when the document contains one or two sentences rather than large blocks of text. During the objectivity assessment, the model first determines whether the given text is descriptive (objective) or expresses an opinion. Mostly objective text skips the sentiment phase: it receives a score of 0.50 and is not processed further. Subjective text continues through the pipeline and is assigned a score above or below 0.50, depending on the detected sentiment. The sentiment analysis API currently supports English, German, Spanish, and French; for other languages, the feature is available in preview. For more information, see the article Language and Region Support in the Text Analytics API.
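The two-stage flow described above (an objectivity gate first, then sentiment scoring only for subjective text) can be sketched as follows. The word lists and scoring rules here are invented placeholders; the real service uses a trained classifier.

```python
# Toy sketch of the pipeline: objective text short-circuits at 0.50,
# subjective text is scored above or below that midpoint. The lexicons
# below are invented placeholders, not the real trained model.

POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def score_sentiment(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos == neg == 0:          # descriptive/objective text: skip scoring
        return 0.50
    # subjective text: push the score above or below the 0.50 midpoint
    return 0.50 + 0.5 * (pos - neg) / (pos + neg)

print(score_sentiment("The report lists quarterly figures."))  # 0.50, objective
print(score_sentiment("Great dashboard, I love it!"))          # above 0.50
print(score_sentiment("Terrible performance, bad UX."))        # below 0.50
```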
Adding tags to images
The Tag Images feature can tag more than 2,000 recognizable objects, living beings, scenery elements, and actions. When a tag is ambiguous, the output can include a hint that clarifies the tag's meaning in the context used. There is no taxonomy of tags or inheritance hierarchy between them. The collection of content tags forms the basis for describing the displayed image in natural language, set out in sentences.
After you upload an image or specify its URL, computer vision algorithms tag it, recognizing objects, living beings, and actions. Tags are applied both to the main subject, such as a person in the foreground, and to the setting (interior or exterior), covering furniture, tools, plants, animals, accessories, devices, and so on. For this feature to work, the image URL or its Base64 representation is required as input. Image tagging supports English, Spanish, Japanese, Portuguese, and Simplified Chinese. For more information, see the language support section.
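Since the tagging function accepts either a URL or Base64-encoded image bytes, here is a minimal sketch of preparing the Base64 form of the input. The payload key name is an assumption for illustration; the raw bytes below are a stand-in for real image data.

```python
# Sketch: Base64-encoding image bytes for the tagging input. The "image"
# payload key is a hypothetical name used here for illustration only.
import base64

def image_to_base64(image_bytes):
    """Encode raw image bytes as an ASCII Base64 string for an API payload."""
    return base64.b64encode(image_bytes).decode("ascii")

fake_image = b"\x89PNG\r\n\x1a\n"      # PNG file signature, not a full image
payload = {"image": image_to_base64(fake_image)}
print(payload["image"])
```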
Schedule a free consultation
to get help with Power BI today