Hierarchical Temporal Memory

Posted by CF on 25-11-28 07:14


Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and on the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (particularly, human) brain. At the core of HTM are learning algorithms that can store, learn, infer, and recall high-order sequences. Unlike most other machine learning methods, HTM continuously learns (in an unsupervised process) time-based patterns in unlabeled data. HTM is robust to noise and has high capacity (it can learn multiple patterns simultaneously). A typical HTM network is a tree-shaped hierarchy of levels (not to be confused with the "layers" of the neocortex, as described below). These levels are composed of smaller elements known as regions (or nodes). A single level in the hierarchy may contain several regions. Higher hierarchy levels often have fewer regions.
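The tree-shaped hierarchy of levels can be pictured with a minimal sketch. This is not Numenta's API; the `Region` and `build_hierarchy` names are illustrative, and the round-robin parent assignment is an assumption made only to show the shape (fewer regions at higher levels, each parent aggregating several children).

```python
class Region:
    """A node in the hierarchy; children feed their output upward."""
    def __init__(self, name):
        self.name = name
        self.children = []

def build_hierarchy(regions_per_level):
    """Build a tree-shaped hierarchy from a bottom-up list of region
    counts, e.g. [4, 2, 1]: four bottom regions, two middle, one top."""
    levels = [[Region(f"L{i}-R{j}") for j in range(n)]
              for i, n in enumerate(regions_per_level)]
    for lower, upper in zip(levels, levels[1:]):
        for k, child in enumerate(lower):
            # assign each lower region to a parent round-robin style
            upper[k % len(upper)].children.append(child)
    return levels

levels = build_hierarchy([4, 2, 1])
top = levels[-1][0]
print(len(top.children))  # the single top region aggregates the middle level
```

With `[4, 2, 1]`, each middle region receives two bottom regions and the top region receives both middle regions, mirroring the "larger levels have fewer regions" structure described above.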



Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns. Each HTM region has the same basic function. In learning and inference modes, sensory data (e.g. data from the eyes) comes into bottom-level regions. In generation mode, the bottom-level regions output the generated pattern of a given class. When set in inference mode, a region (at each level) interprets information coming up from its "child" regions as probabilities of the categories it has in memory. Each HTM region learns by identifying and memorizing spatial patterns: combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another. HTM is the algorithmic component of Jeff Hawkins' Thousand Brains Theory of Intelligence. New findings on the neocortex are progressively integrated into the HTM model, which changes over time in response. The new findings do not necessarily invalidate the previous parts of the model, so ideas from one generation are not necessarily excluded from its successor.
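The idea of memorizing spatial patterns, with sufficiently similar inputs treated as the same stored pattern, can be sketched as follows. This is an illustrative toy, not Numenta's implementation; the `SpatialMemory` class, the bit-set representation, and the overlap threshold are assumptions made for the example.

```python
def overlap(a, b):
    """Number of active bits two binary patterns share."""
    return len(set(a) & set(b))

class SpatialMemory:
    def __init__(self, match_threshold):
        self.match_threshold = match_threshold
        self.coincidences = []  # stored patterns (sets of active bit indices)

    def learn(self, pattern):
        """Return the index of the matching stored pattern, adding the
        input as a new one if nothing overlaps it closely enough."""
        pattern = set(pattern)
        for i, c in enumerate(self.coincidences):
            if overlap(pattern, c) >= self.match_threshold:
                return i  # similar enough: treated as the same pattern
        self.coincidences.append(pattern)
        return len(self.coincidences) - 1

mem = SpatialMemory(match_threshold=3)
mem.learn({1, 2, 3, 4})      # stored as pattern 0
mem.learn({1, 2, 3, 9})      # overlaps 3 bits with pattern 0: same pattern
mem.learn({10, 11, 12, 13})  # no overlap: stored as a new pattern
print(len(mem.coincidences))
```

This shows how a large space of possible input patterns collapses onto a smaller set of stored ones: the second input never creates a new entry.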



During training, a node (or region) receives a temporal sequence of spatial patterns as its input.

1. Spatial pooling identifies frequently observed patterns in the input and memorizes them as "coincidences". Patterns that are significantly similar to one another are treated as the same coincidence. The large number of possible input patterns is thereby reduced to a manageable number of known coincidences.
2. Temporal pooling partitions coincidences that are likely to follow each other in the training sequence into temporal groups. Each group of patterns represents a "cause" of the input pattern (or "name" in On Intelligence).

The concepts of spatial pooling and temporal pooling remain quite important in the current HTM algorithms. Temporal pooling is not yet well understood, and its meaning has changed over time as the HTM algorithms evolved. During inference, the node calculates the set of probabilities that a pattern belongs to each known coincidence. It then calculates the probabilities that the input represents each temporal group.
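Temporal pooling, in its simplest reading, can be sketched as grouping coincidences that frequently follow one another. The sketch below is a deliberately simplified stand-in (real HTM temporal pooling is more involved and has changed across algorithm generations): it counts transitions in a training sequence of coincidence ids, then takes connected components of the frequent-transition graph as temporal groups.

```python
from collections import Counter

def temporal_groups(sequence, min_count=1):
    """Partition coincidence ids into groups of ids that tend to follow
    one another: connected components of the transition graph, keeping
    only transitions seen at least min_count times."""
    transitions = Counter(zip(sequence, sequence[1:]))
    # union-find over coincidence ids
    parent = {c: c for c in sequence}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (a, b), n in transitions.items():
        if n >= min_count and a != b:
            parent[find(a)] = find(b)  # merge the two groups
    groups = {}
    for c in set(sequence):
        groups.setdefault(find(c), set()).add(c)
    return sorted(groups.values(), key=min)

# 0 and 1 alternate, then 2 and 3 alternate: two temporal groups emerge
seq = [0, 1, 0, 1, 2, 3, 2, 3]
print(temporal_groups(seq, min_count=2))
```

The single `1 -> 2` transition falls below `min_count`, so the two alternating pairs stay in separate groups, which is the intended partitioning behavior.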



The set of probabilities assigned to the groups is called a node's "belief" about the input pattern. This belief is the result of the inference, and it is passed to one or more "parent" nodes at the next higher level of the hierarchy. If sequences of patterns are similar to the training sequences, then the probabilities assigned to the groups will not change as often as new patterns arrive. In a more general scheme, the node's belief may be sent to the input of any node(s) at any level(s), but the connections between the nodes remain fixed. The higher-level node combines this output with the output from other child nodes, thus forming its own input pattern. Since resolution in space and in time is lost in each node as described above, beliefs formed by higher-level nodes represent an even larger range of space and time. This is meant to mirror the organisation of the physical world as it is perceived by the human brain.
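The belief computation and its upward passing can be sketched in a few lines. This is a hedged illustration, not Numenta's algorithm: `group_belief` and `parent_input` are hypothetical names, and summing per-coincidence probabilities over each group (then normalizing) is one simple way to realize "probabilities that the input represents each temporal group".

```python
def group_belief(coincidence_probs, groups):
    """A node's belief: the normalized probability that the input
    belongs to each temporal group, obtained by summing the
    probabilities of the coincidences in that group."""
    raw = [sum(coincidence_probs[c] for c in g) for g in groups]
    total = sum(raw)
    return [r / total for r in raw]

def parent_input(child_beliefs):
    """A parent node's input pattern: the combined beliefs of its
    children, here simply concatenated."""
    return [p for belief in child_beliefs for p in belief]

groups = [{0, 1}, {2}]
belief = group_belief({0: 0.2, 1: 0.3, 2: 0.5}, groups)
print(belief)                              # this node's belief over its groups
print(parent_input([belief, [0.9, 0.1]]))  # what a parent with two children sees
```

Concatenation is the simplest choice for combining child outputs; the key point it illustrates is that the parent treats its children's beliefs as just another input pattern, over which it can learn its own coincidences and groups.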


