Power BI & Artificial Intelligence: How to Access It and Where to Use It
Artificial Intelligence in business analytics is one of the strongest trends today. In its 2019 Strategic Technology Trends report, Gartner named the combination of AI with autonomous things as the way forward. Power BI, with its level of sophistication, could not stay out of this conversation, so Microsoft has incorporated AI into Power BI's capabilities. FluentPro's Power BI consulting services team is also keeping its finger on the pulse, and by now it is ready to assist you and your business analytics department with all the new improvements there are to learn and use.
Explaining this new approach, Gartner experts stated that, henceforth, the advantage will belong to analytical systems that do not require substantial participation of IT specialists, complicated data loading and modeling procedures, the creation of a special semantic layer and data structures, or developer input for their full use. So far, however, this has applied mainly to the analysis of traditional structured data. Now vendors are increasingly trying to extend the "self-service" metaphor to the analysis of a wider range of data, including Big Data.
The fact is that effective use of big data requires not only the IT specialists and developers mentioned above but also data scientists. Sadly, there are still very few of them available, and all of them are quite costly specialists. Although there are quite a few tools on the market that facilitate their work (Presto, Impala, and Spark SQL for SQL processing of data in distributed Hadoop repositories, or AtScale for connecting a BI system to a Hadoop cluster), they still cannot satisfy end users in terms of self-service. In this regard, vendors are increasingly turning to artificial intelligence (AI) technology. Microsoft has integrated the Cortana voice assistant and AI elements with the server side of Power BI so that end business users can formulate queries to the analytical system in natural language, even by voice. (That said, we want to remind you that a year ago one of the user-oriented trends was operating Power BI via mobile apps and even Apple Watches.)
Cognitive Services and AI in Power BI
Cognitive Services can be used in Power BI by applying various Azure algorithms when you prepare data for dataflows. Users can now access services for sentiment analysis, key phrase extraction, language detection, and image tagging. The transformations are performed in the Power BI service and do not require a separate Azure Cognitive Services subscription. This feature is available in Power BI Premium.
Cognitive Services support is provided by Premium capacity nodes such as EM2, A2, or P1. The capacity uses a dedicated artificial intelligence workload to run Cognitive Services. During the public preview, this workload is disabled by default. Before using Cognitive Services in Power BI, you must enable the artificial intelligence workload in the capacity settings of the admin portal, in the workloads section, and specify the maximum amount of memory for it. It is recommended to allocate no more than 20%; exceeding this limit will slow down request processing.
Available Cognitive Services and AI features
- Language detection: The language detection function evaluates the input text and returns the language name and its ISO code for each field. This function is useful for columns of arbitrary text whose language is unknown. The Text Analytics API recognizes up to 120 languages.
- Key phrase extraction: The Extract Key Phrases function evaluates unstructured text and returns a list of key phrases for each text field. The function takes a text field and an optional "Cultureinfo" parameter as input. Key phrase extraction works best on large pieces of text, whereas sentiment analysis is more effective on small pieces of text.
- Sentiment scoring: The Score Sentiment function evaluates the input text and returns a sentiment score for each document in the range from 0 (negative) to 1 (positive). This feature is useful for identifying positive and negative opinions on social networks, in customer reviews, and on forums. The Text Analytics API uses a machine learning classification algorithm to estimate sentiment between 0 and 1. Scores close to 1 indicate positive sentiment, and scores close to 0 indicate the opposite. The model is pre-trained on a large corpus of text with different sentiment labels. Currently, the model cannot be trained on your own data. During analysis, the model uses several techniques: text processing, part-of-speech analysis, word ordering, and word associations.
Sentiment analysis is performed on the input text as a whole, as opposed to extracting the sentiment toward a specific entity in the text. In practice, accuracy improves when the document contains one or two sentences rather than large blocks of text. During the objectivity assessment, the model first determines whether the given text is objective (purely descriptive) or carries sentiment. Mostly objective text is not analyzed further: it receives a score of 0.50 and leaves the pipeline. Text that continues through the pipeline is assigned a score above or below 0.50, depending on the sentiment detected. The sentiment analysis API currently supports English, German, Spanish, and French; for other languages, the feature is available in preview. For more information, see the article on language and region support in the Text Analytics API.
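To illustrate the 0-to-1 scale described above, here is a small Python sketch (not part of Power BI itself) that buckets a Score Sentiment result into coarse labels. The 0.50 midpoint for objective text follows the behavior described above; the width of the neutral band is an assumption for illustration.

```python
def label_sentiment(score: float, neutral_band: float = 0.05) -> str:
    """Map a 0..1 sentiment score to a coarse label.

    Objective text clusters at 0.50, so scores within the (assumed)
    neutral band around 0.50 are treated as neutral.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("sentiment score must be between 0 and 1")
    if abs(score - 0.5) <= neutral_band:
        return "neutral"
    return "positive" if score > 0.5 else "negative"
```

For example, `label_sentiment(0.93)` returns `"positive"`, while a review scored at exactly 0.50 is labeled `"neutral"`.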
- Image tagging: The Tag Images feature adds tags for more than 2,000 recognizable objects, living beings, scenery, and actions. When a tag is ambiguous, the output can include a hint that clarifies its meaning in the context of the image. There is no taxonomy of tags or inheritance hierarchy between them. The collection of content tags forms the basis for describing the image in natural language, laid out in sentences.
After you upload an image or specify its URL, computer vision algorithms add tags to it, recognizing the objects, living beings, and actions it contains. Tags are added not only to the main subject, such as a person in the foreground, but also to the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, devices, and so on. The feature requires an image URL or a Base64-encoded image as input. Image tagging currently supports English, Spanish, Japanese, Portuguese, and Simplified Chinese. For more information, see the language support section.
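For readers who want to experiment with the same Text Analytics service outside of Power BI, the sketch below calls the public v3.0 REST "languages" endpoint (the language detection feature described above) directly. The endpoint host and API key are placeholders, not real values; you would substitute your own Azure resource's endpoint and key.

```python
import json
from urllib import request

# Placeholders -- replace with your own Azure Cognitive Services values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"


def build_documents(texts):
    """Wrap raw strings in the documents envelope the REST API expects."""
    return {"documents": [{"id": str(i), "text": t}
                          for i, t in enumerate(texts, start=1)]}


def detect_language(texts):
    """POST documents to the v3.0 languages endpoint and return parsed JSON.

    Each returned document includes the detected language name and its
    ISO 639-1 code, as described above.
    """
    body = json.dumps(build_documents(texts)).encode("utf-8")
    req = request.Request(
        ENDPOINT + "/text/analytics/v3.0/languages",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": API_KEY,
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The same request shape works for the key phrase and sentiment endpoints by swapping `/languages` for `/keyPhrases` or `/sentiment`; only the response payload differs.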