Machine learning allows computers to imitate aspects of human behaviour by learning from historical data. This article introduces three distinct machine learning techniques: dynamic time warping, triplet loss, and the Linde-Buzo-Gray algorithm.

Dynamic time warping

In time series analysis, dynamic time warping (DTW) is an algorithm for measuring the similarity between two temporal sequences that may vary in speed. For example, DTW can detect similarities in the way two people walk, even if one person walks faster than the other or if both speed up and slow down during an observation. DTW has been applied to temporal sequences of video, audio, and graphics data; in fact, any data that can be turned into a linear sequence can be analysed with DTW. A well-known application is automatic speech recognition, which must cope with speakers who talk at different speeds. Other applications include speaker recognition and online signature recognition. DTW can also be used for partial shape matching.

The DTW algorithm produces a discrete matching between the existing elements of one series and another; in other words, it does not allow time-scaling of segments within a sequence. Other methods allow continuous warping. For example, Correlation Optimized Warping (COW) divides the sequence into uniform segments that are scaled in time using linear interpolation to produce the best matching warping. Because segments can be stretched or compressed, this scaling effectively creates new elements, warping the sequence more sensitively than DTW's discrete matching of raw elements. In finance and econometrics, dynamic time warping is also used to compare the quality of predictions against real-world data.
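To make the matching step concrete, the sketch below shows the classic dynamic-programming formulation of DTW for two one-dimensional sequences. The NumPy implementation, the absolute-difference local cost, and the absence of a windowing constraint are illustrative choices for this example, not features of any particular library.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW between two 1-D sequences.

    D[i, j] holds the minimum cumulative cost of aligning x[:i] with y[:j];
    the warping path may stretch or compress either sequence, but every
    element is matched discretely to an existing element of the other series.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])  # local distance between samples
            # Extend the cheapest of the three admissible predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two "walks" with the same shape but different pacing.
slow = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 3.0, 4.0])
fast = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(dtw_distance(slow, fast))  # 0.0: the sequences align despite unequal lengths
```

The full table costs O(nm) time and memory, which is fine for short sequences; practical systems often add a window constraint such as a Sakoe-Chiba band, or use approximations such as FastDTW, to keep the cost manageable.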

Triplet loss

Triplet loss is a loss function for machine learning algorithms in which a reference input (called the anchor) is compared with a matching input (called the positive) and a non-matching input (called the negative). The loss minimizes the distance between the anchor and the positive while maximizing the distance between the anchor and the negative. In 2003, M. Schultz and T. Joachims developed an early formulation analogous to triplet loss (without the concept of anchors) for metric learning from relative comparisons.
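In its common hinge form, the loss for one triplet is max(d(anchor, positive) - d(anchor, negative) + margin, 0). The short sketch below illustrates this for plain embedding vectors; the Euclidean distance and the margin value of 1.0 are assumptions made for the example, not fixed parts of the definition.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss for a single triplet of embedding vectors.

    The loss is zero once the anchor is closer to the positive than to the
    negative by at least `margin`; otherwise it penalises the remaining gap.
    """
    d_pos = np.linalg.norm(anchor - positive)  # distance anchor -> positive
    d_neg = np.linalg.norm(anchor - negative)  # distance anchor -> negative
    return max(d_pos - d_neg + margin, 0.0)

# Toy 2-D embeddings: the positive shares the anchor's label, the negative does not.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([2.0, 2.0])
print(triplet_loss(a, p, n))  # 0.0, because the margin constraint already holds
```

In practice the same expression is evaluated over many triplets drawn from a mini-batch, and a mining strategy decides which anchor, positive, and negative examples are paired.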

Triplet loss models build on the assumption that the distance between two samples with the same label should be smaller than the distance between two samples with different labels. Triplet loss works directly on embedded distances, whereas t-SNE preserves the order of embeddings via probability distributions. Triplet loss has also been extended to maintain a series of distance orders simultaneously by optimizing a continuous relevance degree with a chain (i.e., a ladder) of distance inequalities. This extension leads to the Ladder Loss, which improves the performance of visual-semantic embedding in learning-to-rank tasks. In natural language processing, triplet loss is also one of the loss functions examined for BERT fine-tuning in the SBERT architecture.

Linde-Buzo-Gray Algorithm

The Linde-Buzo-Gray (LBG) algorithm generates codebooks for vector quantization with minimal error and distortion. Yoseph Linde, Andres Buzo, and Robert M. Gray proposed the algorithm in 1980, and it remains the most common algorithm for codebook generation. It uses a training set of vectors to build a codebook with the lowest possible distortion, and it is closely related to the k-means method for clustering data.

The LBG algorithm assumes a fixed length for the codewords. It is an iterative process: the basic idea is to partition the set of training vectors into groups and to find the most representative vector for each group. The codebook consists of these representative vectors.
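The splitting-and-refinement idea can be sketched as follows. This is a minimal illustration that assumes squared Euclidean distortion, a multiplicative perturbation for the split step, and a target codebook size that is a power of two; it is not a reproduction of the exact 1980 procedure.

```python
import numpy as np

def lbg_codebook(training, size, eps=0.01, tol=1e-6):
    """Grow a codebook by repeated splitting and k-means-style refinement.

    Starts from the centroid of the whole training set, doubles the number of
    codewords by perturbing each one, then alternates between assigning every
    training vector to its nearest codeword and recomputing each codeword as
    the centroid of its group, until the average distortion stops improving.
    """
    codebook = training.mean(axis=0, keepdims=True)  # one codeword: the global centroid
    while len(codebook) < size:
        # Split: replace each codeword with two slightly perturbed copies.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        prev = np.inf
        while True:
            # Partition: assign each training vector to its nearest codeword.
            dists = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            distortion = (dists.min(axis=1) ** 2).mean()
            # Update: each codeword becomes the centroid of its group.
            for k in range(len(codebook)):
                members = training[labels == k]
                if len(members) > 0:
                    codebook[k] = members.mean(axis=0)
            if prev - distortion < tol:  # distortion has stopped improving
                break
            prev = distortion
    return codebook

# Toy 2-D training set with four loose clusters.
rng = np.random.default_rng(0)
centres = [(0, 0), (0, 1), (1, 0), (1, 1)]
training = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in centres])
print(lbg_codebook(training, 4))  # a 4 x 2 array of codewords
```

Each outer pass doubles the number of codewords, so the codebook grows as 1, 2, 4, 8, and so on until the requested size is reached.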


