{"id":1857,"date":"2020-11-04T15:00:14","date_gmt":"2020-11-04T15:00:14","guid":{"rendered":"https:\/\/cog-ist.com\/?post_type=blog_content&#038;p=1857"},"modified":"2025-08-13T21:03:01","modified_gmt":"2025-08-13T21:03:01","slug":"derin-ogrenme-ve-bedenlenme-pietro-perconti-alessio-plebe","status":"publish","type":"blog_content","link":"https:\/\/cog-ist.com\/en\/blog_content\/derin-ogrenme-ve-bedenlenme-pietro-perconti-alessio-plebe\/","title":{"rendered":"Derin \u00d6\u011frenme ve Bedenlenme \u2014 Pietro Perconti, Alessio Plebe"},"content":{"rendered":"<p>\u00d6zg\u00fcn ad\u0131: \u201c<a href=\"http:\/\/www.di.unito.it\/~lieto\/AIC2019\/AIC_2019_paper_6.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Deep Learning and Embodiment<\/a>\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"5d7a\">\u00d6zet<\/h2>\n\n\n\n<p id=\"2134\">Bedenlenmi\u015f bili\u015f (embodiment) yayg\u0131n bir bili\u015fsel bilim dal\u0131 haline gelmi\u015f ve bir\u00e7ok \u00f6nemli teorik ve deneysel ilerlemeyi beraberinde getirmi\u015ftir. Bedenlenmi\u015f bili\u015fle ilgilenen baz\u0131 bilim insanlar\u0131, eylem ve bedensel duruma ba\u011fl\u0131 bili\u015fin, klasik temsilci (representationalist) ve hesaplamaya dayal\u0131 yakla\u015f\u0131mla k\u0131yaslanamaz oldu\u011funu iddia etmi\u015ftir. Bu arg\u00fcman i\u00e7in, s\u0131kl\u0131kla, g\u00f6rsel alg\u0131lama durumu en iyi \u00f6rnek olarak sunulur. Son y\u0131llarda, \u201cderin \u00f6\u011frenme\u201d (deep learning) ad\u0131 alt\u0131nda toplanan algoritmalar ailesi yapay zek\u00e2y\u0131 k\u00f6kten de\u011fi\u015ftirmi\u015ftir. Bu aileler, makinelerin pek \u00e7ok karma\u015f\u0131k bili\u015fsel g\u00f6revde, \u00f6zellikle de g\u00f6rmede, insans\u0131 performanslara ula\u015fmas\u0131n\u0131 sa\u011flam\u0131\u015ft\u0131r. 
Such results are achieved by learning from stacks of static images, a method that ignores movement, temporal dependence, and any interaction with the environment. Deep learning models were developed for engineering purposes, and advancing cognitive science was never among the goals of that research community. Nevertheless, the successes of deep learning models in vision challenge assumptions about visual perception that are widespread in embodied and enactive cognition.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"3155\"><strong>1.<\/strong>&nbsp;<strong>Introduction<\/strong><\/h2>\n\n\n\n<p id=\"4497\">For more than twenty years, the concept of embodied cognition has been a focus of interest in cognitive science. Roughly speaking, embodied cognition emphasizes, in one of several possible ways, the role the body plays in cognition. One such way is the claim that the mechanisms of concept manipulation and reasoning are inseparable from the mechanisms by which our body acts and perceives, as Lakoff and Johnson [44] argue. Since the body is the locus of action, embodied cognition naturally points to enacted cognition, as No\u00eb [56, 53] suggests. 
Indeed, the body interacts with its environment, and embodiment is therefore associated with Gibson\u2019s ecological psychology [20, 21, 27]. Another important source of support for embodiment comes from artificial intelligence: to simulate a cognitive capacity, the system must be given a mechanical body and effective controls [2, 57]. Embodiment has certainly contributed to substantial progress in cognitive science, but its rejection of the computational and representational theory of mind has remained controversial [19, 5, 31]. The various aspects of embodiment, and its opposition to the computational theory of mind, are discussed in detail in Section 2.<\/p>\n\n\n\n<p id=\"b2da\">We argue that a new and unexpected actor is about to step into this debate: deep learning. The term refers to a family of artificial neural network techniques that has accumulated a great many exciting results. In 2012, a group from the University of Toronto led by Geoffrey Hinton (one of the inventors of deep learning) won a highly challenging large-scale image classification competition. Hinton was subsequently recruited by Google, which was using deep learning for its image search engine. 
In 2016, DeepMind, the company founded by Demis Hassabis and later acquired by Google, defeated the world champion at Go, a Chinese board game more complex than chess [73]. The leading Internet companies were among the first to adopt deep learning at scale [25], and, beyond their internal needs, they remain the heaviest investors in research on it.<\/p>\n\n\n\n<p id=\"8919\">Deep neural models are the only artificial perception systems that consistently outperform humans in object recognition tasks [80]. From the perspective of embodied (enacted, embedded) cognition this performance is troubling, because it is achieved by computations that ignore any interaction with the environment, any dynamics, any movement. Caution is certainly required before drawing conclusions, because deep neural models were not built as tools for studying cognition and are not biologically realistic models (see Section 4). 
At the very least, however, these results suggest that the ingredients of embodied cognition in vision are an important part of the recognition task, but not an indispensable one.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"c0bf\"><strong>2.<\/strong>&nbsp;<strong>Embodied Cognition and Computationalism<\/strong><\/h2>\n\n\n\n<p id=\"e3d8\">In this story, cognitive science usually presents itself in two stages: the first rests on a single computational account for modeling human intelligence, while the second is a more ecological and biological approach that aims to understand how the human brain actually works in interaction with its environment. In early computational psychology the main task was to design cognitive architectures, that is, models showing how various information-processing devices interact with one another to carry out a given cognitive task. Even now, this remains a central aim of cognitive science. However, whereas early computational psychology disregarded the material medium in which all this processing takes place, the more recent phase of cognitive science is embodied: it takes as its basis how the human brain actually works and how the human body as a whole encodes information coming from the environment. 
The turning points in this shift include the remarkable achievements in neuroscience over the past few decades: James Gibson\u2019s ecological approach to vision [21] and Parallel Distributed Processing (PDP) [68]. These research programs hold that understanding intelligence is not a matter of picturing an abstract, disembodied entity in front of a static world, but of taking the whole scene into account, a scene in which perceptual scenarios rich in affordances suggest to the human body how to accomplish its goals.<\/p>\n\n\n\n<p id=\"87c7\">The first casualty of this line of reasoning was, of course, the idea of mental representation. That idea posits a sharp dualism in which mental representations act as a kind of mediator between the world and the subject for the purposes of information processing. By contrast, proponents of 4E (embodied, embedded, enactive and extended) cognition prefer a more dynamic view in which individuals engage with the things around them in a more direct way. This, however, is quite misleading. The concept of mental representation does not itself require denying bodily possibilities and constraints. 
The concept is simply an abstract rule that systematically links bodily encoding with certain environmental events. How an abstract rule can play such a causal role in the world is among the most celebrated achievements of the representational computational theory of mind. The point, rather, is that computational psychology is neutral about the ecology of cognition. Computational psychology is simply a way of solving the mind-body problem, so to speak, not a way of excluding ecological and dynamic factors from our understanding of how knowledge works. If we conceive of their workings correctly, mental representations can be the dynamic and ecological structures that many advocates of 4E cognition want, and nowadays neuroscience and neural networks make this possible [58, 32, 59, 60]. It is a matter of understanding what mental representations really are, not of driving them off the stage. Perhaps, with all this in mind, embodied cognition can be reconciled with classical computational psychology [52]. The real problem with embodiment and enactivism is the assumption that only by adopting the 4E account of cognition can we deal with cognition correctly; in other words, that if a cognitive architecture is not modeled in a human-like way, we may fail to understand the cognitive process we are interested in. 
This, however, is exactly what deep learning models call into question. Although their roots lie in PDP<a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftn1\" target=\"_blank\" rel=\"noopener\">[1]<\/a>, deep learning models follow no biological constraint beyond the neural-network framework itself and reach their results in a purely mathematical way, with no cognitive concerns at all. For embodied cognition this looks like a scandal: if human-like cognitive performance can be obtained some other way, what are biological constraints for?<\/p>\n\n\n\n<p id=\"eb14\">A similar scandal occurred a few years ago in cognitive ethology<a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftn2\" target=\"_blank\" rel=\"noopener\">[2]<\/a>. After Gordon Gallup [17] devised the mark test, also known as the mirror test, many other ethologists set about measuring animals\u2019 capacity to recognize their own reflection in a mirror. It turned out that, besides humans, the species showing this ability were those closest to us, with the best results coming from the great apes. One might think there is a phylogenetic reason for this: the capacity for self-recognition requires a human-like brain. 
Within this framework, a small scandal broke out when magpies, songbirds of the crow family, also turned out to be capable of self-recognition. The interesting part is that magpies have a brain very different from that of primates and other mammals: they lack a neocortex, the large area on the outer surface of the brain where self-recognition (like most higher-level processing) takes place. As with deep learning\u2019s object recognition performance, we must be cautious here. Recent research suggests a homology between certain neuronal cell types in the dorsal telencephalon of birds and cell types in the neocortical circuits of mammals [33, 3]. Be that as it may, just as we need to temper embodiment fanaticism, we also need to temper our enthusiasm for the neocortex.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"c9d4\"><strong>3.<\/strong>&nbsp;<strong>Deep Neural Models<\/strong><\/h2>\n\n\n\n<p id=\"a78c\">Deep neural models, also known as deep learning models, are responsible for the resurgence of artificial intelligence after several decades of slow and meager progress [71]. In all its variations, deep learning has shown unprecedented success in many applications, often approaching human performance [49]. 
Deep learning grew out of the artificial neural networks of the 1980s PDP (parallel distributed processing) project [68]. The basic \u201cparallel distributed\u201d structure consists of simple units organized into separate layers, with one-way connections between each layer and the next. This structure, known as a feedforward network, is preserved in most deep learning models. With models that learn any meaningful function from scratch, through experience alone, PDP re-established a strong empiricist account. The success of PDP is largely due to an effective mathematical rule known as backpropagation, which adjusts the connections between neurons according to the input\/output pairs presented during learning. The learning mathematics of deep networks is an evolved and refined version of this rule from the PDP models; indeed, Geoffrey Hinton [28] was one of the main contributors to the PDP project.<\/p>\n\n\n\n<p id=\"6437\">The word \u201cdeep\u201d added to the PDP-style feedforward network refers simply to the number of layers between the input and output layers, usually known as \u201chidden\u201d layers. Neural models can learn increasingly complex functions as the number of units is increased. 
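The feedforward-plus-backpropagation scheme described above can be sketched in a few lines. This is a minimal illustrative example, not code from the paper or from the PDP project: a network with one hidden layer learns a toy input/output mapping (XOR) by propagating the output error backwards and adjusting the connection weights by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy input/output pairs (XOR, a classic non-linear task for such networks).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer: input -> hidden -> output, feedforward connections only.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # network output
    return h, out

lr = 1.0
for step in range(5000):
    h, out = forward(X)
    # Backpropagation: the output error is propagated backwards layer by layer,
    # and each weight is nudged down the gradient of the squared error.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

_, out = forward(X)
print(np.round(out.ravel(), 2))
```

The stochastic gradient descent used to train deep networks follows the same principle, computing these updates on small random batches of examples rather than the full training set.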
However, this also increases the number of parameters to be optimized, and learning becomes harder. In particular, increasing the number of units by adding layers was observed to be less effective than widening a single hidden layer [10].<\/p>\n\n\n\n<p id=\"5acb\">An original learning strategy, again developed by Hinton, succeeded in breaking the barrier of three layers [29], opening the way to deep models. At present, the most successful learning scheme is stochastic gradient descent<a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftn3\" target=\"_blank\" rel=\"noopener\">[3]<\/a>, which is not very different from classical backpropagation [37, 72].<\/p>\n\n\n\n<p id=\"931e\">There is a fundamental difference in purpose between the first generation of artificial neural networks and deep neural models. The former were developed, as the title of Rumelhart and McClelland\u2019s book makes clear, for \u201cExplorations in the Microstructure of Cognition\u201d [68]. Deep neural networks, by contrast, were developed with engineering goals. 
Even though many of the protagonists of this development are the same people who were influential in developing the earlier artificial neural networks, this time there is no ambition or interest in exploring cognition. A striking example is the recent invention of a deep model known as the variational autoencoder [38, 64], whose mathematical formulation is remarkably close to the free-energy principle of Friston\u2019s \u201cpredictive brains\u201d theory [12, 13]. Although Friston\u2019s theory has had a wide resonance in cognitive science, the proponents of the variational autoencoder are either unaware of this coincidence or entirely indifferent to it. Since most components of deep learning, such as reinforcement learning and recurrent networks, ultimately have PDP origins, however distant, they owe something to neuroscience and cognitive science; but for now this connection is neglected: all that matters is pragmatic success in applications. These successes have been resounding enough to prompt reflections on how deep models relate to cognitive science as a whole, or to \u201cgeneral\u201d artificial intelligence. These are important issues for cognitive science, but they are not the ones examined here. 
Our focus is on the results deep learning has achieved in artificial vision, and on their relevance to embodiment as a topic in cognitive science.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"e2e8\"><strong>4.<\/strong>&nbsp;<strong>Disembodied Vision<\/strong><\/h2>\n\n\n\n<p id=\"7422\">There are several reasons to regard vision as the case in which the results achieved with deep learning challenge embodied cognition. First, as seen in Section 2, vision is a paradigmatic case used in support of embodied cognition. Vision is also, as the vision science community itself acknowledges, the most successful field of application for deep learning [80]:<\/p>\n\n\n\n<p id=\"d910\">For decades, perception was considered a skill unique to biological systems; few understood its mechanisms, and producing the same perception in artificial systems was all but impossible. But this state of affairs has been reversed in recent years by the impressive progress in computer perception models developed through \u2018deep learning\u2019 approaches [\u2026]. For as long as I can remember, we perception scientists have exploited the absence of human-level artificial perception systems to our own advantage in our papers and grant applications [\u2026]. 
But now neural networks [\u2026] regularly outperform humans in object recognition tasks [\u2026]. We have no excuse anymore.<\/p>\n\n\n\n<p id=\"135b\">Another reason to be interested in vision is that the deep models used for this application have a distinctive architecture, generally regarded as closer to the brain than ordinary layered neural networks. This kind of architecture is called a Deep Convolutional Neural Network (DCNN), because it integrates the convolution operation [67] into a layered, learned structure. The strategy was first proposed in an architecture called the Neocognitron<a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftn4\" target=\"_blank\" rel=\"noopener\">[4]<\/a> [15], where \u201cneo\u201d was a reference to the earlier Cognitron [14]. In the Neocognitron, units of the S-cell type and units of the C-cell type alternate in sequence; the naming of the cell types echoes Hubel and Wiesel\u2019s [30] classification of simple and complex cells. The S-units act as convolution kernels, while the C-units downsample the images resulting from convolution by spatial averaging. The crucial difference from traditional convolution in image processing is that in the Neocognitron the kernels are learned by the neural network itself. 
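The alternating S- and C-stages just described can be sketched as follows. This is an illustrative assumption-laden toy, not Fukushima's code: an S-stage applies a convolution kernel, and a C-stage downsamples the feature map by spatial averaging. In the Neocognitron and in DCNNs the kernel would be learned; here it is fixed for clarity.

```python
import numpy as np

def s_stage(image, kernel):
    """Valid 2-D convolution over the image: the role played by S-units."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def c_stage(fmap, size=2):
    """Downsampling by spatial averaging over size x size blocks: the C-units."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).mean(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
kernel = np.array([[1.0, -1.0]])                  # tiny horizontal-difference kernel
features = s_stage(image, kernel)                 # S-stage output: 6x5 feature map
pooled = c_stage(features)                        # C-stage output: 3x2 map
print(features.shape, pooled.shape)
```

Stacking several such S/C pairs, with learned kernels, gives the layered structure a DCNN shares with the Neocognitron.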
The first version of the Neocognitron learned by unsupervised self-organization, using a \u201cwinner-take-all\u201d strategy: only the weights of the S-units responding most strongly within a given area, and of their neighboring cells, were modified. A later version [16] used a weak form of supervision: at the start of training, the units to be modified in the S-layer were selected manually instead of by winner-take-all, and after this initial selection training proceeded unsupervised.<\/p>\n\n\n\n<p id=\"d7ad\">The convergence between the Neocognitron and the PDP project came about when backpropagation was applied to an architecture consisting of two layers of Fukushima\u2019s S-cell type followed by ordinary PDP neural layers [46]. This was one of the first steps toward the DCNN. Like the artificial neural networks of the PDP project, this blend of Neocognitron and backpropagation achieved relatively good success, especially in character recognition [47], but it was not the first choice in mainstream computer vision. The great change came when DCNNs became \u201cdeep\u201d in the way ordinary layered networks had, once again thanks to Hinton\u2019s work with his PhD student Krizhevsky [42]. This model dominated the largest competition in computer vision, the ImageNet Large Scale Visual Recognition Challenge. 
The model cut the error rate of the ImageNet Challenge from 26 percent to 16.4 percent. This initial success turned computer vision toward DCNNs, and with a number of refinements performance continued to improve, eventually surpassing that of human subjects [63]. For our purposes here it is quite important to see in detail how this result was obtained. ImageNet is an image database organized according to the hierarchy of nouns in WordNet, a lexical dictionary in which each lexical entry is associated with hundreds of images [69]. The Visual Recognition Challenge uses a subset of ImageNet with 1000 different categories, each corresponding to one of WordNet\u2019s sets of synonyms (synsets) and each comprising roughly 1000 images. About 1.2 million images are used to train the models and 150 thousand to test them. The DCNN models are shown the training images, together with their known categories, several times over. All images are 256\u00d7256 in size. The model is given no further information: nothing about the context of each image, nothing about the relations between categories, nothing about the poses each object can assume in space, nothing about the affordances the objects provide, and nothing about how objects can change their appearance over time. 
In short, the model learns to recognize objects in an entirely disembodied way.<\/p>\n\n\n\n<p id=\"e34e\">The main lesson the computer vision community had drawn from embodied cognition [70, 56, 57] was that trying to understand an image as a static task is hopeless, whereas vision becomes much easier when treated as an interactive process in which the agent engages with its environment. Yet artificial vision in embodied and enactive systems has never come close to the performance of DCNN models. Active vision models have mostly been developed for highly simplified tasks, for example just two categories consisting of a circle and a diamond [1], or four categories consisting of cat, dog, giraffe and horse [81]. When the best active vision models were compared [9], the error rate on a set of 100 different individual objects was around 40 percent or worse.<\/p>\n\n\n\n<p id=\"4816\">It is worth noting that this paper does not survey the advantages and disadvantages of deep learning; we have provided references for that in Section 3. 
Our considerations stem from the empirical observation of deep learning\u2019s overwhelming advantage in vision over all other existing methods.<\/p>\n\n\n\n<p id=\"ada1\">Still, one might object that DCNN models are software designed in ways quite remote from how natural vision works, and that they therefore cannot be used to evaluate cognitive claims. Indeed, the \u201cneurons\u201d in deep learning models bear little resemblance to their biological cousins. However, recent work has revealed surprising similarities between the activation patterns in the layers of convolutional neural models and the voxel<a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftn5\" target=\"_blank\" rel=\"noopener\">[5]<\/a> patterns of subjects viewing the same images. One of the first attempts to relate DCNN results to the visual system consisted of adding, at a given level of the artificial network model, a layer that predicts the responses of voxels in voxel space, and training this layer on sets of images and their corresponding fMRI recordings [23]. Using this method, a model very similar to AlexNet [4] was compared with fMRI data [24]. In this comparison, the mapping between the model and the fMRI data was trained on 1750 images and their voxel responses. 
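The encoding approach just described can be sketched schematically. This is a minimal illustration on synthetic data, not the pipeline of [23]: all sizes and variable names are assumptions. A linear readout is fitted from a network layer's activations to voxel responses on training images, then scored by how well it predicts the voxels for held-out images.

```python
import numpy as np

rng = np.random.default_rng(1)

n_train, n_test, n_units, n_voxels = 1750, 120, 50, 10
acts_train = rng.normal(size=(n_train, n_units))  # layer activations per image
acts_test = rng.normal(size=(n_test, n_units))

# Synthetic "brain": pretend voxel responses are a noisy linear function
# of the layer activations, so there is real structure to recover.
true_map = rng.normal(size=(n_units, n_voxels))
voxels_train = acts_train @ true_map + 0.5 * rng.normal(size=(n_train, n_voxels))
voxels_test = acts_test @ true_map + 0.5 * rng.normal(size=(n_test, n_voxels))

# Fit the linear voxel-prediction layer by least squares on the training images.
readout, *_ = np.linalg.lstsq(acts_train, voxels_train, rcond=None)
pred = acts_test @ readout

# Score each voxel by the correlation between predicted and measured responses,
# the kind of figure reported in such comparisons.
scores = [np.corrcoef(pred[:, v], voxels_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean prediction correlation: {np.mean(scores):.2f}")
```

With real data the activations would come from a DCNN layer and the voxel responses from fMRI, but the fit-then-correlate logic is the same.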
The model responses were predictive of voxels in the visual cortex at a level that cannot be dismissed as chance, with scores just below 0.5 for the V1<a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftn6\" target=\"_blank\" rel=\"noopener\">[6]<\/a>&nbsp;areas and just below 0.3 for area LO (lateral occipital cortex). The same technique was also used to generate synthetic fMRI data from stimuli employed in classic vision experiments, such as retinotopy or face\/place contrasts, and significant commonalities were found between the synthetic fMRI results and the DCNN.<\/p>\n\n\n\n<p id=\"d77d\">The use of synthetic fMRI data was pursued with a different strategy, based on building a statistical model of activity in higher visual cortex by gathering extensive information from previous studies [36]. This model allows new responses to be generated whenever they are needed for experimental purposes. Using this method, Bryan Tripp [77] was able to test the similarities between cortical responses and DCNN models across several properties: population sparseness, orientation, size and position tuning, occlusion, clutter, and so on. The DCNNs tested were AlexNet [42] and VGG-16 [74].<\/p>\n\n\n\n<p id=\"07ad\">An alternative method for comparing DCNN models with fMRI results is representational similarity analysis, introduced by Nikolaus Kriegeskorte [41, 40]. 
This method can be applied to any distributed response to any stimulus. It subtracts from 1 the correlation between the responses to each pair of stimuli. The resulting matrix is especially informative when the stimuli are grouped by categorical similarity. The whole idea, in fact, is this: the responses to stimuli should span a space in which the mutual relations between responses mirror the relations between the stimuli themselves. This is precisely the idea of structural representations, one of the core concepts of cognitive science [76, 54, 60]. Representational similarity analysis has been applied to compare the responses in higher visual cortex, measured with fMRI in humans and with cell recordings in monkeys, against the responses measured in several artificial models [35]. This study is of particular interest because, alongside AlexNet, it includes several models that are more biologically plausible.<\/p>\n\n\n\n<p id=\"ec72\">The most biologically plausible model is VisNet, organized into five layers [83, 66], so that its connectivity approximates the receptive field sizes of V1, V2, V4, the posterior inferior temporal cortex, and the inferior temporal cortex. The network learns by unsupervised self-organization [84], through synaptic modifications derived from the Hebbian rule [26]. 
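The Hebbian-style synaptic modification just mentioned can be illustrated with a minimal sketch. This is our own toy illustration under simplifying assumptions (a single rate-coded unit and explicit weight renormalization), not the actual learning rule implemented in VisNet:

```python
import numpy as np

# Toy Hebbian update for one rate-coded unit: synapses that are active
# together with the postsynaptic response are strengthened, and the
# weight vector is renormalized to keep it bounded (self-organization).
def hebbian_step(w, x, lr=0.1):
    y = float(np.dot(w, x))        # postsynaptic activity
    w = w + lr * y * x             # Hebb: strengthen co-active synapses
    return w / np.linalg.norm(w)   # renormalize to unit length

rng = np.random.default_rng(0)
w = rng.random(4)
w = w / np.linalg.norm(w)
pattern = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(100):
    w = hebbian_step(w, pattern)
# After repeated presentations, the weights align with the input pattern.
```

With repeated presentations of the same pattern, the weights on the inactive inputs decay toward zero and the weight vector converges onto the (normalized) input pattern, which is the basic sense in which such networks self-organize.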
VisNet is not fully committed to embodied and enactive cognition, but it attempts to incorporate into a biologically plausible model the perception of an object while the observer is moving toward it, or while the object itself is moving. To this end, learning requires a specific mechanism, called trace memory, because a cell\u2019s learning is influenced by the decaying trace of its previous activity. With training based on viewing objects from many different angles as they move, this rule enables the model to learn to recognize an object across encounters; it thus aims to recover the embodied and interactive component of vision.<\/p>\n\n\n\n<p id=\"12c4\">Kriegeskorte and colleagues computed several representational similarity matrices on sets of natural images covering various animate and inanimate categories [35], and compared the voxels representing the inferior temporal cortex (IT) against the models (the study actually compared 37 different models; this paper focuses only on AlexNet and VisNet). 
The analysis revealed that the animate\/inanimate category distinction in AlexNet is more similar to the structural representation in IT than that in VisNet.<\/p>\n\n\n\n<p id=\"f3ee\">These studies on the similarities between DCNNs and the visual system have sparked an extensive debate in the visual neuroscience community about the suitability of deep learning models for its own scientific purposes. Opinions range from largely positive acceptance [18, 80], through cautious interest [48, 22], to more skeptical stances [55, 65, 8]. The visual system certainly has numerous structural features that depart sharply from a DCNN model. To mention a few: the visual maps in the cortex have many strong interconnections and a great many weaker ones [79, 78, 51]; receptive field sizes vary within a cortical map, and the degree of variation is larger in higher cortical areas [34]; receptive fields are also continuously modulated by task demands [39]; and in addition to motion areas, scene dynamics affect recognition areas as well [75]. Above all, it is certainly true that the visual system in the brain is embodied and enactive [6, 62]. Still, the essential point remains that the DCNN is the only model that both achieves human-level performance in vision and shows similarity to the brain activations of subjects viewing the same images. 
This fact may indicate the existence of a core processing stage, operating on local patterns of information, that is necessary for discriminating the content of a scene. This local processing appears to be relatively independent of environmental and bodily cues.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"2b3c\"><strong>5.<\/strong>&nbsp;<strong>Conclusion<\/strong><\/h2>\n\n\n\n<p id=\"fdf2\">The performance achieved by deep learning models in visual pattern recognition comes as quite a surprise for contemporary cognitive science, which has been deeply influenced by the 4E account of cognition and, in particular, by concerns about embodiment. By those lights, it should not have happened. The fact that it happened despite mainstream expectations shows how misleading the polarization between classical cognitive science and embodiment concerns is. As mentioned above, computational psychology and classical mental representations are ecologically neutral. Moreover, we are facing a kind of micro-singularity in the development of cognitive science, a case in which artificial intelligence has surpassed human intelligence: a new challenge for both cognitive science and common sense, originating in the field of artificial intelligence. 
Perhaps this is the moral of the story: we should revise our current theoretical expectations and accept that biologically inspired, embodied cognitive architectures are no guarantee of successful processing.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"5efa\"><strong>References<\/strong><\/h2>\n\n\n\n<p id=\"776f\">1. Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11, 209\u2013243 (2003)<br>2. Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47, 139\u2013159 (1991)<br>3. Calabrese, A., Woolley, S.M.N.: Coding principles of the canonical cortical microcircuit in the avian brain. Proceedings of the National Academy of Sciences USA 112, 3517\u20133522 (2015)<br>4. Chatfield, K., Simonyan, K., Vedaldi, A., Zisserman, A.: Return of the devil in the details: Delving deep into convolutional nets. CoRR abs\/1405.3531 (2014)<br>5. Chemero, A.: Radical embodied cognitive science. MIT Press, Cambridge (MA) (2009)<br>6. Churchland, P.S., Ramachandran, V., Sejnowski, T.: A critique of pure vision. In: Koch, C., Davis, J. (eds.) Large-Scale Neuronal Theories of the Brain. MIT Press, Cambridge (MA) (1994)<br>7. Cichy, R.M., Kaiser, D.: Deep neural networks as scientific models. Trends in Cognitive Sciences 23, 305\u2013317 (2019)<br>8. Conway, B.R.: The organization and operation of inferior temporal cortex. Annual Review of Vision Science 4, 19.1\u201319.22 (2018)<br>9. De Croon, G.C., Sprinkhuizen-Kuyper, I.G., Postma, E.: Comparing active vision models. Image and Vision Computing 27, 374\u2013384 (2009)<br>10. de Villiers, J., Barnard, E.: Backpropagation neural nets with one and two hidden layers. IEEE Transactions on Neural Networks 4, 136\u2013141 (1992)<br>11. 
Eickenberg, M., Gramfort, A., Varoquaux, G., Thirion, B.: Seeing it all: Convolutional network layers map the function of the human visual system. NeuroImage 152, 184\u2013194 (2017)<br>12. Friston, K., Kilner, J., Harrison, L.: A free energy principle for the brain. Journal of Physiology \u2014 Paris 100, 70\u201387 (2006)<br>13. Friston, K., Stephan, K.E.: Free-energy and the brain. Synthese 159, 417\u2013458 (2007)<br>14. Fukushima, K.: Cognitron: a self-organizing multilayered neural network. Biological Cybernetics 20, 121\u2013136 (1975)<br>15. Fukushima, K.: Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36, 193\u2013202 (1980)<br>16. Fukushima, K.: Neocognitron: a hierarchical neural network capable of visual pattern recognition. Neural Networks 1, 119\u2013130 (1988)<br>17. Gallup, G.: Chimpanzees: Self-recognition. Science 167, 86\u201387 (1970)<br>18. Gauthier, I., Tarr, M.J.: Visual object recognition: Do we (finally) know more now than we did? Annual Review of Vision Science 2, 16.1\u201316.20 (2016)<br>19. Gelder, T.v.: What might cognition be, if not computation? Journal of Philosophy 91, 345\u2013381 (1995)<br>20. Gibson, J.J.: The senses considered as perceptual systems. Houghton Mifflin, Boston (MA) (1966)<br>21. Gibson, J.J.: The Ecological Approach to Perception. Houghton Mifflin, Boston (MA) (1979)<br>22. Grill-Spector, K., Weiner, K.S., Gomez, J., Stigliani, A., Natu, V.S.: The functional neuroanatomy of face perception: from brain measurements to deep neural networks. Interface Focus 8, 20180013 (2018)<br>23. G\u00fc\u00e7l\u00fc, U., van Gerven, M.A.J.: Unsupervised feature learning improves prediction of human brain activity in response to natural images. PLoS Computational Biology 10, 1\u201316 (2014)<br>24. 
G\u00fc\u00e7l\u00fc, U., van Gerven, M.A.J.: Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience 35, 10005\u201310014 (2015)<br>25. Hazelwood, K., Bird, S., Brooks, D., Chintala, S., Diril, U., Dzhulgakov, D., Fawzy, M., Jia, B., Jia, Y., Kalro, A., Law, J., Lee, K., Lu, J., Noordhuis, P., Smelyanskiy, M., Xiong, L., Wang, X.: Applied machine learning at Facebook: A datacenter infrastructure perspective. In: IEEE International Symposium on High Performance Computer Architecture (HPCA). pp. 620\u2013629 (2018)<br>26. Hebb, D.O.: The Organization of Behavior. John Wiley, New York (1949)<br>27. Heras-Escribano, M.: The Philosophy of Affordances. Palgrave Macmillan, London (2019)<br>28. Hinton, G.E., McClelland, J.L., Rumelhart, D.E.: Distributed representations. In: Rumelhart and McClelland [68], pp. 77\u2013109<br>29. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313, 504\u2013507 (2006)<br>30. Hubel, D., Wiesel, T.: Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology 195, 215\u2013243 (1968)<br>31. Hutto, D.D., Myin, E.: Radicalizing enactivism: basic minds without content. MIT Press, Cambridge (MA) (2013)<br>32. Isaac, A.M.: Embodied cognition as analog computation. Italian Journal of Cognitive Science 14, 239\u2013259 (2018)<br>33. Karten, H.J.: Vertebrate brains and evolutionary connectomics: on the origins of the mammalian neocortex. Philosophical Transactions of the Royal Society B 370, 20150060 (2015)<br>34. Kay, K.N., Winawer, J., Mezer, A., Wandell, B.A.: Compressive spatial summation in human visual cortex. Journal of Neurophysiology 110, 481\u2013494 (2013)<br>35. Khaligh-Razavi, S.M., Kriegeskorte, N.: Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology 10, e1003915 (2014)<br>36. 
Khan, S., Tripp, B.P.: One model to learn them all. CoRR abs\/1706.05137 (2017)<br>37. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Proceedings of International Conference on Learning Representations (2014)<br>38. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. In: Proceedings of International Conference on Learning Representations (2014)<br>39. Klein, B., Harvey, B.M., Dumoulin, S.O.: Attraction of position preference by spatial attention throughout human visual cortex. Neuron 84, 227\u2013237 (2014)<br>40. Kriegeskorte, N.: Relating population-code representations between man, monkey, and computational models. Frontiers in Neuroscience 3, 363\u2013373 (2009)<br>41. Kriegeskorte, N., Mur, M., Bandettini, P.: Representational similarity analysis \u2014 connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience 2, 4 (2009)<br>42. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems. pp. 1090\u20131098 (2012)<br>43. Lake, B.M., Ullman, T.D., Tenenbaum, J.B., Gershman, S.J.: Building machines that learn and think like people. Behavioral and Brain Science 40, 1\u201372 (2017)<br>44. Lakoff, G., Johnson, M.: Philosophy in the Flesh. The Embodied Mind and its Challenge to Western Thought. Basic Books, New York (1999)<br>45. Landgrebe, J., Smith, B.: Making AI meaningful again. Synthese&nbsp;<a href=\"https:\/\/doi.org\/10.1007\/s11229-019-02192-y,\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/doi.org\/10.1007\/s11229-019-02192-y,<\/a>&nbsp;1\u201321 (2019)<br>46. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Computation 1, 541\u2013551 (1989)<br>47. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. 
Proceedings of the IEEE 86, 2278\u20132324 (1998)<br>48. Lehky, S.R., Tanaka, K.: Neural representation for object recognition in inferotemporal cortex. Current Opinion in Neurobiology 37, 23\u201335 (2016)<br>49. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F.E.: A survey of deep neural network architectures and their applications. Neurocomputing 234, 11\u201326 (2017)<br>50. L\u00f3pez-Rubio, E.: Computational functionalism for the deep learning era. Minds and Machines 28, 667\u2013688 (2018)<br>51. Markov, N., Ercsey-Ravasz, M.M., Gomes, A.R.R., Lamy, C., Magrou, L., Vezoli, J., Misery, P., Falchier, A., Quilodran, R., Gariel, M.A., Sallet, J., Gamanut, R., Huissoud, C., Clavagnier, S., Giroud, P., Sappey-Marinier, D., Barone, P., Dehay, C., Toroczkai, Z., Knoblauch, K., Essen, D.C.V., Kennedy, H.: A weighted and directed interareal connectivity matrix for macaque cerebral cortex. Cerebral Cortex 24, 17\u201336 (2014)<br>52. Mi\u0142kowski, M.: Embodied cognition meets multiple realizability. Italian Journal of Cognitive Science 14, 349\u2013364 (2018)<br>53. No\u00eb, A.: Action in Perception. MIT Press, Cambridge (MA) (2004)<br>54. O\u2019Brien, G., Opie, J.: Notes toward a structuralist theory of mental representation. In: Clapin, H., Staines, P., Slezak, P. (eds.) Representation in Mind \u2014 New Approaches to Mental Representation. Elsevier, Amsterdam (2004)<br>55. Olshausen, B.A.: Perception as an inference problem. In: Gazzaniga, M.S. (ed.) The Cognitive Neurosciences, pp. 295\u2013304. MIT Press, Cambridge (MA) (2014), fifth edition<br>56. O\u2019Regan, J.K., No\u00eb, A.: A sensorimotor account of vision and visual consciousness. Behavioral and Brain Science 24, 939\u20131031 (2001)<br>57. Pfeifer, R., Bongard, J.: How the body shapes the way we think: a new view of intelligence. MIT Press, Cambridge (MA) (2007)<br>58. Piccinini, G.: Computation and representation in cognitive neuroscience. Minds and Machines 28, 1\u20136 (2018)<br>59. 
Plebe, A.: Cognition and computation. Italian Journal of Cognitive Science 14, 281\u2013286 (2018)<br>60. Plebe, A., De La Cruz, V.M.: Neural representations beyond \u201cplus X\u201d. Minds and Machines 28, 93\u2013117 (2018)<br>61. Prior, H., Schwarz, A., G\u00fcnt\u00fcrk\u00fcn, O.: Mirror-induced behavior in the magpie (Pica pica): Evidence of self-recognition. PLoS Biology 6, 1\u20139 (2008)<br>62. Ramachandran, V., Arnel, C., Foster, C., Stoddard, R.: Object recognition can drive motion perception. Nature 395, 852\u2013853 (1998)<br>63. Rawat, W., Wang, Z.: Deep convolutional neural networks for image classification: A comprehensive review. Neural Computation 29, 2352\u20132449 (2017)<br>64. Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. In: Xing, E.P., Jebara, T. (eds.) Proceedings of Machine Learning Research. pp. 1278\u20131286 (2014)<br>65. Robinson, L., Rolls, E.T.: Invariant visual object recognition: biologically plausible approaches. Biological Cybernetics 109, 505\u2013535 (2015)<br>66. Rolls, E.T., Stringer, S.M.: Invariant visual object recognition: A model, with lighting invariance. Journal of Physiology \u2014 Paris 100, 43\u201362 (2006)<br>67. Rosenfeld, A.: Picture Processing by Computer. Academic Press, New York (1969)<br>68. Rumelhart, D.E., McClelland, J.L. (eds.): Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge (MA) (1986)<br>69. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115, 211\u2013252 (2015)<br>70. de Sa, V.R., Ballard, D.H.: Category learning through multi-modality sensing. Neural Computation 10(5) (1998)<br>71. Schmidhuber, J.: Deep learning in neural networks: An overview. 
Neural Networks 61, 85\u2013117 (2015)<br>72. Schmidt, M., Roux, N.L., Bach, F.: Minimizing finite sums with the stochastic average gradient. Mathematical Programming 162, 83\u2013112 (2017)<br>73. Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., Hassabis, D.: Mastering the game of Go with deep neural networks and tree search. Nature 529, 484\u2013489 (2016)<br>74. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs\/1409.1556 (2015)<br>75. Stigliani, A., Jeska, B., Grill-Spector, K.: Encoding model of temporal processing in human visual cortex. Proceedings of the National Academy of Sciences USA 114, E11047\u2013E11056 (2017)<br>76. Swoyer, C.: Structural representation and surrogative reasoning. Synthese 87, 449\u2013508 (1991)<br>77. Tripp, B.P.: Similarities and differences between stimulus tuning in the inferotemporal visual cortex and convolutional networks. In: International Joint Conference on Neural Networks. pp. 3551\u20133560 (2017)<br>78. Van Essen, D.C.: Organization of visual areas in macaque and human cerebral cortex. In: Chalupa, L., Werner, J. (eds.) The Visual Neurosciences. MIT Press, Cambridge (MA) (2003)<br>79. Van Essen, D.C., DeYoe, E.A.: Concurrent processing in the primate visual cortex. In: Gazzaniga, M.S. (ed.) The Cognitive Neurosciences. MIT Press, Cambridge (MA) (1994)<br>80. VanRullen, R.: Perception science in the age of deep neural networks. Frontiers in Psychology 8, 142 (2017)<br>81. Volpi, N.C., Quinton, J.C., Pezzulo, G.: How active perception and attractor dynamics shape perceptual categorization: a computational model. Neural Networks 60, 1\u201316 (2014)<br>82. von der Malsburg, C.: Network self-organization. In: Zornetzer, S.F., Davis, J., Lau, C. 
(eds.) An Introduction to Neural and Electronic Networks. Academic Press, New York (1990)<br>83. Wallis, G., Rolls, E.: Invariant face and object recognition in the visual system. Progress in Neurobiology 51, 167\u2013194 (1997)<br>84. Willshaw, D.J., von der Malsburg, C.: How patterned neural connections can be set up by self-organization. Proceedings of the Royal Society of London B 194, 431\u2013445 (1976)<\/p>\n\n\n\n<p id=\"2280\"><a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftnref1\" target=\"_blank\" rel=\"noopener\">[1]<\/a>&nbsp;PDP (Parallel Distributed Processing) is a model of cognition. It rests on the idea that the representation of information is distributed as patterns of activation over a set of hypothetical neural units. These neural units operate in parallel and function interactively (T.N.).<\/p>\n\n\n\n<p id=\"871e\"><a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftnref2\" target=\"_blank\" rel=\"noopener\">[2]<\/a>&nbsp;Cognitive ethology is the study of mental experiences, such as consciousness and intentionality, in animals. 
It also examines how these experiences affect animals as they interact with their environments (T.N.).<\/p>\n\n\n\n<p id=\"8749\"><a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftnref3\" target=\"_blank\" rel=\"noopener\">[3]<\/a>&nbsp;Stochastic gradient descent is an optimization algorithm used to solve problems encountered in machine learning (T.N.).<\/p>\n\n\n\n<p id=\"e4b7\"><a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftnref4\" target=\"_blank\" rel=\"noopener\">[4]<\/a>&nbsp;The Neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979 (T.N.).<\/p>\n\n\n\n<p id=\"fe31\"><a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftnref5\" target=\"_blank\" rel=\"noopener\">[5]<\/a>&nbsp;fMRI is a technique that lets us see which brain regions are active during a given activity by comparing oxygen levels across regions of the brain. The technique, however, lacks the resolution to examine cells one by one; we can only see the activation level of units made up of thousands, perhaps even millions, of cells. 
Each of the smallest units whose activity can be observed in fMRI is called a voxel.<\/p>\n\n\n\n<p id=\"c483\"><a href=\"https:\/\/medium.com\/cogist\/derin-%C3%B6%C4%9Frenme-ve-bedenlenme-perconti-plebe-2ebc269b788a#_ftnref6\" target=\"_blank\" rel=\"noopener\">[6]<\/a>&nbsp;V1 is the primary visual area of the cerebral cortex (T.N.).<\/p>","protected":false},"featured_media":1858,"template":"","meta":{"_acf_changed":false},"event_publishing_tags":[84,226,524,415,196,737,94,93,691,64,92,227,222,125,736,74,228,229,858,75,223,859,61],"kategori":[305],"class_list":["post-1857","blog_content","type-blog_content","status-publish","has-post-thumbnail","hentry","event_publishing_tags-artificial-intelligence","event_publishing_tags-artificial-learning","event_publishing_tags-bedenlenme","event_publishing_tags-bedenlenmis-bilis","event_publishing_tags-bedensel-bilis","event_publishing_tags-bedensellik","event_publishing_tags-bilis","event_publishing_tags-bilissel-bilim","event_publishing_tags-cogist","event_publishing_tags-cognition","event_publishing_tags-cognitive-science","event_publishing_tags-deep-learning","event_publishing_tags-derin-ogrenme","event_publishing_tags-embodied-cognition","event_publishing_tags-embodiment","event_publishing_tags-felsefe","event_publishing_tags-machine-learning","event_publishing_tags-makine-ogrenmesi","event_publishing_tags-neural-networks","event_publishing_tags-philosophy","event_publishing_tags-yapay-ogrenme","event_publishing_tags-yapay-sinir-aglari","event_publishing_tags-yapay-zeka","kategori-ceviri"],"acf":[],"_links":{"self":[{"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v2\/blog_content\/1857","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v2\/blog_content"}],"about":[{"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v2\/types\/blog_content"}],"version-history":[{"count":0,"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v
2\/blog_content\/1857\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v2\/media\/1858"}],"wp:attachment":[{"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v2\/media?parent=1857"}],"wp:term":[{"taxonomy":"event_publishing_tags","embeddable":true,"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v2\/event_publishing_tags?post=1857"},{"taxonomy":"kategori","embeddable":true,"href":"https:\/\/cog-ist.com\/en\/wp-json\/wp\/v2\/kategori?post=1857"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}