Friday 24 November 2017

Deep Learning


Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.
Deep learning is a class of machine learning algorithms that
·         use a cascade of many layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. The algorithms may be supervised or unsupervised and applications include pattern analysis (unsupervised) and classification (supervised).
·         are based on the (unsupervised) learning of multiple levels of features or representations of the data. Higher level features are derived from lower level features to form a hierarchical representation.
·         are part of the broader machine learning field of learning representations of data.
·         learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
·         use some form of gradient descent for training (a minimal sketch follows this list).
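To make these ideas concrete, here is a rough sketch of a tiny two-layer network trained with gradient descent, written in plain NumPy. The XOR data, layer sizes and learning rate are illustrative assumptions, not anything prescribed above.

```python
import numpy as np

# Toy dataset: XOR, which a single linear layer cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # first layer of nonlinear processing units
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # second layer takes the first layer's output as input
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer feeds the next, building higher-level features.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backpropagation of the cross-entropy loss through both layers.
    d_out = p - y
    grad_W2 = h.T @ d_out
    grad_b2 = d_out.sum(axis=0)
    d_hidden = d_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ d_hidden
    grad_b1 = d_hidden.sum(axis=0)

    # Gradient descent: move every weight a small step against its gradient.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print(np.round(p, 2))  # predictions should approach [0, 1, 1, 0]
```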

Thursday 23 November 2017

Data science

Data science, also known as data-driven science, is an interdisciplinary field that uses scientific methods, processes, and systems to extract knowledge or insights from data in various forms, either structured or unstructured, similar to data mining.


Data science is a "concept to unify statistics, data analysis and their related methods" in order to "understand and analyze actual phenomena" with data.It employs techniques and theories drawn from many fields within the broad areas of mathematicsstatisticsinformation science, and computer science, in particular from the subdomains of machine learningclassificationcluster analysisdata miningdatabases, and visualization.


Facebook - http://bit.ly/2kPS7kt 

Wednesday 22 November 2017

Data mining

Data mining is the computing process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. It is an essential process where intelligent methods are applied to extract data patterns. It is an interdisciplinary subfield of computer science.


The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.

The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection and data preparation nor the result interpretation and reporting are part of the data mining step itself, but they do belong to the overall KDD process as additional steps.
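As a small illustration of the cluster-analysis task mentioned above, here is a minimal sketch using scikit-learn; the synthetic records, number of clusters and other parameters are assumptions made purely for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for a large data set: three blobs of customer-like records.
rng = np.random.default_rng(42)
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(100, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(100, 2)),
])

# Cluster analysis: discover groups of records without any labels.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)

# The cluster assignments summarise the input and can feed further analysis.
print(np.bincount(labels))  # roughly 100 records per discovered group
```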


Facebook - http://bit.ly/2kPS7kt 

Tuesday 21 November 2017

Data visualization

Data visualization is viewed by many disciplines as a modern equivalent of visual communication. It involves the creation and study of the visual representation of data, meaning "information that has been abstracted in some schematic form, including attributes or variables for the units of information."


A primary goal of data visualization is to communicate information clearly and efficiently via statistical graphics, plots and information graphics. Numerical data may be encoded using dots, lines, or bars to visually communicate a quantitative message. Effective visualization helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable and usable. Users may have particular analytical tasks, such as making comparisons or understanding causality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.
Data visualization is both an art and a science. It is viewed as a branch of descriptive statistics by some, but also as a grounded theory development tool by others. Increased amounts of data created by Internet activity and an expanding number of sensors in the environment are referred to as "big data" or the Internet of things. Processing, analyzing and communicating this data present ethical and analytical challenges for data visualization. The field of data science and its practitioners, called data scientists, help address this challenge.
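To illustrate how dots, lines and bars encode a quantitative message, here is a small sketch using matplotlib; the quarterly figures are invented purely for the example.

```python
import matplotlib.pyplot as plt

# Illustrative quarterly figures (made up for the example).
quarters = ["Q1", "Q2", "Q3", "Q4"]
revenue = [120, 135, 150, 170]
target = [125, 130, 145, 160]

fig, ax = plt.subplots()
ax.bar(quarters, revenue, label="Revenue")              # bars encode magnitude per category
ax.plot(quarters, target, color="black", marker="o",
        label="Target")                                 # a line encodes the trend for comparison
ax.set_xlabel("Quarter")
ax.set_ylabel("Amount (arbitrary units)")
ax.set_title("Revenue vs. target")
ax.legend()
plt.show()
```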

Facebook - http://bit.ly/2kPS7kt 

Monday 20 November 2017

IT operations analytics

In the fields of information technology (IT) and systems management, IT operations analytics (ITOA) is an approach or method to retrieve, analyze, and report data for IT operations. ITOA may apply big data analytics to large datasets to produce business insights.



IT operations analytics (ITOA) (also known as advanced operational analytics, or IT data analytics) technologies are primarily used to discover complex patterns in high volumes of often "noisy" IT system availability and performance data. Forrester Research defined IT analytics as "the use of mathematical algorithms and other innovations to extract meaningful information from the sea of raw data collected by management and monitoring technologies."

Operations research as a discipline emerged from the Second World War to improve military efficiency and decision-making on the battlefield. However, only with the emergence of machine learning tech in the early 2000s could an artificially intelligent operational analytics platform actually begin to engage in the high-level pattern recognition that could adequately serve business needs.
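As a rough sketch of what this kind of pattern discovery can look like in practice, here is a simple rolling-baseline anomaly check on a simulated performance metric, written with pandas and NumPy; the data, window size and threshold are illustrative assumptions only.

```python
import numpy as np
import pandas as pd

# Simulated response-time metric (ms): a noisy baseline with a few injected spikes.
rng = np.random.default_rng(1)
latency = pd.Series(rng.normal(loc=200, scale=20, size=500))
latency.iloc[[120, 310, 450]] += 300   # hypothetical incidents

# Rolling baseline and spread, then flag points far from the recent norm.
baseline = latency.rolling(window=50, min_periods=10).mean()
spread = latency.rolling(window=50, min_periods=10).std()
anomalies = latency[(latency - baseline).abs() > 3 * spread]

print(anomalies.index.tolist())  # should include the injected spike positions
```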
#bigdata #ai #machinelearning

Twitter - https://twitter.com/gvmanalytics
Website - http://gvmanalytics.com/ 
Facebook - http://bit.ly/2kPS7kt 

Sunday 19 November 2017

Neural Network


An Artificial Neural Network (ANN) is a computational model that is inspired by the way biological neural networks in the human brain process information. Artificial Neural Networks have generated a lot of excitement in Machine Learning research and industry, thanks to many breakthrough results in speech recognition, computer vision and text processing. 
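To make the idea of a computational model concrete, a single artificial neuron can be sketched in a few lines of Python; the inputs, weights and bias below are arbitrary illustrative values.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through an activation."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation squashes z into (0, 1)

# Illustrative values only: three input signals and hand-picked weights.
print(neuron(np.array([0.5, -1.2, 3.0]), np.array([0.4, 0.1, -0.6]), bias=0.2))
```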

Facebook - http://bit.ly/2kPS7kt 

Saturday 18 November 2017

Predictive Analytics


Predictive analytics is the branch of advanced analytics which is used to make predictions about unknown future events. Predictive analytics uses many techniques from data mining, statistics, modeling, machine learning, and artificial intelligence to analyze current data to make predictions about the future.
Predictive analytics encompasses a variety of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future or otherwise unknown events.
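As a minimal sketch of the idea, here is a simple predictive model fitted to historical data with scikit-learn; the monthly sales figures are invented for the example, and a plain linear trend stands in for the many techniques listed above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical facts: monthly sales for the past 12 months (illustrative numbers).
months = np.arange(1, 13).reshape(-1, 1)
sales = np.array([100, 104, 110, 113, 119, 124, 128, 133, 139, 142, 148, 153])

# Predictive modeling: fit a statistical model to the historical data ...
model = LinearRegression().fit(months, sales)

# ... and use it to make predictions about future, otherwise unknown, months.
future = np.arange(13, 16).reshape(-1, 1)
print(model.predict(future).round(1))
```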

Friday 17 November 2017

Machine Learning


Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining, although the latter subfield focuses more on exploratory data analysis and is often framed as unsupervised learning. Machine learning can also be unsupervised: it can be used to learn and establish baseline behavioral profiles for various entities and then to find meaningful anomalies.
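As a rough sketch of that last idea, here is an unsupervised model that learns a baseline behavioral profile and flags deviations, using scikit-learn's IsolationForest; the simulated features and the contamination setting are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative "behavioral" features per entity, e.g. daily logins and data transferred.
rng = np.random.default_rng(7)
normal_behavior = rng.normal(loc=[10, 50], scale=[2, 10], size=(500, 2))
odd_behavior = np.array([[40, 400], [0, 900]])          # hypothetical outliers
events = np.vstack([normal_behavior, odd_behavior])

# The unsupervised model learns a baseline profile, then flags deviations (-1 = anomaly).
detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = detector.predict(events)
print(np.where(flags == -1)[0])   # indices of events flagged as meaningful anomalies
```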

Facebook - http://bit.ly/2kPS7kt