Hierarchical Temporal Memory
Author: CF · Posted 25-11-28 07:14 (edited 25-11-28 07:14)
Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and on the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. At the core of HTM are learning algorithms that can store, learn, infer, and recall high-order sequences. Unlike most other machine learning methods, HTM continuously learns (in an unsupervised process) time-based patterns in unlabeled data. HTM is robust to noise and has high capacity (it can learn multiple patterns simultaneously). A typical HTM network is a tree-shaped hierarchy of levels (not to be confused with the "layers" of the neocortex, described below). These levels are composed of smaller elements called regions (or nodes). A single level in the hierarchy may contain several regions. Higher hierarchy levels often have fewer regions.
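HTM's noise robustness comes from representing inputs as sparse distributed representations (SDRs): wide binary vectors with only a few active bits, compared by overlap. A minimal sketch (not Numenta's implementation; the vector width and bit positions are illustrative):

```python
# Toy SDR overlap matching: two SDRs that share most of their active bits
# are treated as the same pattern, so flipping a few bits does not change
# which stored pattern an input matches.

def overlap(sdr_a: set[int], sdr_b: set[int]) -> int:
    """Number of active bits the two SDRs have in common."""
    return len(sdr_a & sdr_b)

pattern = {3, 17, 42, 101, 250}   # 5 active bits out of, say, 2048
noisy   = {3, 17, 42, 101, 999}   # one active bit moved by noise
other   = {7, 88, 300, 512, 1024} # an unrelated pattern

assert overlap(pattern, noisy) == 4   # still a strong match
assert overlap(pattern, other) == 0   # clearly different
```

Because matching is by overlap rather than exact equality, capacity stays high: many distinct patterns can coexist with negligible chance of accidental collision.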
Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns. Each HTM region has the same basic function. In learning and inference modes, sensory data (e.g. data from the eyes) comes into bottom-level regions. In generation mode, the bottom-level regions output the generated pattern of a given class. When set in inference mode, a region (at each level) interprets information coming up from its "child" regions as probabilities of the categories it has in memory. Each HTM region learns by identifying and memorizing spatial patterns: combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another. HTM is the algorithmic component of Jeff Hawkins' Thousand Brains Theory of Intelligence. New findings on the neocortex are therefore progressively incorporated into the HTM model, which changes over time in response. The new findings do not necessarily invalidate the previous parts of the model, so ideas from one generation are not necessarily excluded in its successor.
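The spatial learning step described above (an input similar enough to a stored pattern is folded into it; a novel input is memorized as new) can be sketched as follows. This is a hypothetical toy, not Numenta's algorithm; the class name and threshold are illustrative:

```python
# Toy spatial learning in one HTM region: inputs whose overlap with a
# stored "coincidence" meets a threshold are merged into it; otherwise a
# new coincidence is memorized.

class ToyRegion:
    def __init__(self, match_threshold: int = 3):
        self.coincidences: list[set[int]] = []
        self.match_threshold = match_threshold

    def learn(self, pattern: set[int]) -> int:
        """Return the index of the coincidence this pattern maps to."""
        for i, c in enumerate(self.coincidences):
            if len(c & pattern) >= self.match_threshold:
                return i                        # similar enough: same coincidence
        self.coincidences.append(set(pattern))  # novel: memorize a new coincidence
        return len(self.coincidences) - 1

region = ToyRegion()
a = region.learn({1, 2, 3, 4, 5})
b = region.learn({1, 2, 3, 4, 9})   # one bit off -> same coincidence
c = region.learn({10, 11, 12, 13})  # novel input -> new coincidence
assert a == b == 0 and c == 1
```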
During training, a node (or region) receives a temporal sequence of spatial patterns as its input.
1. Spatial pooling identifies frequently observed patterns in the input and memorizes them as "coincidences". Patterns that are significantly similar to one another are treated as the same coincidence. A large number of possible input patterns is thereby reduced to a manageable number of known coincidences.
2. Temporal pooling partitions coincidences that are likely to follow one another in the training sequence into temporal groups. Each group of patterns represents a "cause" of the input pattern (or "name" in On Intelligence).
The concepts of spatial pooling and temporal pooling are still quite important in the current HTM algorithms. Temporal pooling is not yet well understood, and its meaning has changed over time (as the HTM algorithms evolved). During inference, the node calculates the set of probabilities that a pattern belongs to each known coincidence. Then it calculates the probabilities that the input represents each temporal group.
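Temporal pooling as described above can be approximated in a toy form: count transitions between coincidences in the training sequence, then cluster coincidences linked by frequent transitions. The connected-components clustering here is a stand-in, not Numenta's actual temporal pooling algorithm:

```python
# Illustrative temporal grouping: coincidences that frequently follow one
# another in the training sequence end up in the same temporal group.

from collections import Counter

def temporal_groups(sequence: list[int], min_count: int = 2) -> list[set[int]]:
    # Count how often coincidence a is immediately followed by b.
    transitions = Counter(zip(sequence, sequence[1:]))
    # Undirected graph over coincidences with frequent transitions.
    neighbors: dict[int, set[int]] = {c: set() for c in sequence}
    for (a, b), n in transitions.items():
        if n >= min_count and a != b:
            neighbors[a].add(b)
            neighbors[b].add(a)
    # Connected components of this graph are the temporal groups.
    groups: list[set[int]] = []
    seen: set[int] = set()
    for start in neighbors:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(neighbors[node] - group)
        seen |= group
        groups.append(group)
    return groups

# Coincidences 0 and 1 alternate, as do 2 and 3; the pairs rarely mix.
seq = [0, 1, 0, 1, 0, 2, 3, 2, 3, 2]
assert sorted(map(sorted, temporal_groups(seq))) == [[0, 1], [2, 3]]
```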
The set of probabilities assigned to the groups is called a node's "belief" about the input pattern. This belief is the result of the inference and is passed to one or more "parent" nodes at the next higher level of the hierarchy. If sequences of patterns resemble the training sequences, the probabilities assigned to the groups will not change as often as patterns are received. In a more general scheme, a node's belief can be sent to the input of any node(s) at any level(s), but the connections between the nodes remain fixed. The higher-level node combines this output with the output from its other child nodes, thus forming its own input pattern. Since resolution in space and time is lost in each node as described above, beliefs formed by higher-level nodes represent an even larger range of space and time. This is meant to reflect the organisation of the physical world as it is perceived by the human brain.
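The upward belief passing described above can be sketched as follows: each child emits a probability distribution over its temporal groups, and the parent assembles the child beliefs into its own input pattern. The combination rule (simple concatenation) and the group names are illustrative assumptions, not Numenta's exact scheme:

```python
# Hedged sketch of belief propagation up an HTM hierarchy.

def child_belief(group_scores: dict[str, float]) -> dict[str, float]:
    """Normalize raw group match scores into a probability distribution."""
    total = sum(group_scores.values())
    return {g: s / total for g, s in group_scores.items()}

def parent_input(beliefs: list[dict[str, float]]) -> list[float]:
    """Concatenate child beliefs into the parent node's input vector."""
    return [p for belief in beliefs for p in belief.values()]

left  = child_belief({"edge": 3.0, "corner": 1.0})    # one child's belief
right = child_belief({"motion": 1.0, "static": 1.0})  # a sibling's belief
assert parent_input([left, right]) == [0.75, 0.25, 0.5, 0.5]
```

Because each child already collapses many raw inputs into a few group probabilities, the parent's input spans a wider range of space and time than any single child's, matching the loss of resolution described above.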