Markov chain

History

The Markov chain was proposed by the Russian mathematician Andrey Markov (Андрей Андреевич Марков). In research published between 1906 and 1907, in order to show that independence between random variables is not a necessary condition for the weak law of large numbers and the central limit theorem to hold, Markov constructed a random process in which the conditional probabilities of successive variables depend on one another, and proved that under certain conditions it converges to a set of vectors. This random process was later named the Markov chain.

After the Markov chain was proposed, Paul Ehrenfest and Tatiana Afanasyeva used it in 1907 to establish the Ehrenfest model of diffusion. In 1912, Jules Henri Poincaré studied Markov chains on finite groups and obtained the Poincaré inequality.

In 1931, Andrey Kolmogorov (Андрей Николаевич Колмогоров) extended the Markov chain to a continuous index set while studying the diffusion problem, obtaining the continuous-time Markov chain, and introduced a formula for computing its joint distribution. Independently of Kolmogorov, Sydney Chapman had obtained the same formula in 1926 while studying Brownian motion; it is now known as the Chapman-Kolmogorov equation.

In 1953, Nicholas Metropolis et al. carried out a random simulation of a fluid's target distribution function by constructing a Markov chain. This method was further improved by Wilfred K. Hastings in 1970 and developed into the present-day Metropolis-Hastings algorithm.

In 1957, Richard Bellman first proposed Markov decision processes (MDP) through a discrete stochastic optimal control model.

Between 1959 and 1962, the Soviet mathematician Eugene Borisovich Dynkin refined Kolmogorov's theory and used the Dynkin formula to connect the stationary Markov process with the martingale process.

Based on Markov chains, more complex Markov models (such as hidden Markov models and Markov random fields) have been proposed one after another and have found applications in practical problems such as pattern recognition.

Definition

A Markov chain is a set of discrete random variables with the Markov property. Specifically, let X = {X_n : n > 0} be a set of random variables in a probability space whose index set is a one-dimensional countable set. If the values of the random variables all lie in a countable set S, that is, X_i = s_i with s_i ∈ S, and the conditional probabilities of the random variables satisfy the following relationship:

P(X_{t+1} = s_{t+1} | X_1 = s_1, ..., X_t = s_t) = P(X_{t+1} = s_{t+1} | X_t = s_t)

then X is called a Markov chain, the countable set S is called the state space, and the values taken by the Markov chain in the state space are called states. The Markov chain defined here is a discrete-time Markov chain (Discrete-Time MC, DTMC). A process with the same structure but a continuous index set is called a continuous-time Markov chain (Continuous-Time MC, CTMC); in essence it is a Markov process. Commonly, the index set of a Markov chain is called the "step" or "time-step".

The above formula defines the Markov property at the same time as it defines the Markov chain. This property is also called "memorylessness": given step t, the random variable at step t+1 is conditionally independent of all random variables before step t, i.e. P(X_{t+1} | X_1, ..., X_t) = P(X_{t+1} | X_t). On this basis, the Markov chain also possesses the strong Markov property: for any stopping time, the states of the Markov chain before and after the stopping time are independent of each other.

Explanatory example

A common example of a Markov chain is a simplified model of stock fluctuations: if a stock rises on a given day, then with probability p it starts to fall tomorrow and with probability 1-p it continues to rise; if the stock falls on a given day, then with probability q it starts to rise tomorrow and with probability 1-q it continues to fall. The rise and fall of the stock is a Markov chain (see the simulation sketch after this list), and the concepts in the definition correspond to the example as follows:

  • Random variable: the state of the stock on a given day; state space: "rising" and "falling"; index set: the day number.

  • Conditional probability relationship: by definition, even if the entire history of the stock is known, whether it rises or falls on a given day is related only to the state of the previous day.

  • Memorylessness: the stock's performance on a given day is related only to the previous day and has nothing to do with the rest of its history (memorylessness is defined together with the conditional probability relationship).

  • Independence of the states before and after a stopping time: take the record of the stock's rises and falls and cut out a segment from it. We cannot tell which segment was cut out, because the cutting point t is a stopping time: the records before and after t (t-1 and t+1) have no dependence on each other.
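As a concrete illustration, the following minimal sketch simulates the stock chain above with assumed parameter values p = 0.4 and q = 0.3 (hypothetical numbers chosen for the example, not taken from the source) and reports the fraction of "rising" days over a long run.

```python
import random

# Hypothetical parameters: p = P(fall tomorrow | rose today), q = P(rise tomorrow | fell today)
p, q = 0.4, 0.3

def simulate_stock(days, state="rising"):
    """Simulate the two-state stock Markov chain for the given number of days."""
    history = [state]
    for _ in range(days - 1):
        if state == "rising":
            state = "falling" if random.random() < p else "rising"
        else:
            state = "rising" if random.random() < q else "falling"
        history.append(state)
    return history

history = simulate_stock(10_000)
print("fraction of rising days:", history.count("rising") / len(history))
```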

n-th order Markov chain

An n-th order Markov chain has n-th order memory and can be regarded as a generalization of the Markov chain. By analogy with the definition of the Markov chain, an n-th order Markov chain satisfies the following condition:

P(X_t = s_t | X_{t-1} = s_{t-1}, ..., X_1 = s_1) = P(X_t = s_t | X_{t-1} = s_{t-1}, ..., X_{t-n} = s_{t-n}),  t > n

According to the above formula, the traditional Markov chain can be regarded as a 1st-order Markov chain. By the Markov property, an n-th order Markov chain can be reduced to an ordinary Markov chain by taking groups of n consecutive states as the components of a new, vector-valued state.

Theory and properties

Transition theory

The change of the state of a random variable in a Markov chain over time is called evolution or transition. Here we introduce two ways of describing the structure of a Markov chain, namely the transition matrix and the transition graph, and define the properties a Markov chain may exhibit in the transition process.

Transition probability and transition matrix

Main entry: Transition matrix

The conditional probabilities between the random variables of a Markov chain can be defined as the following (single-step) transition probability and n-step transition probability:

p_{ij} = P(X_{t+1} = j | X_t = i)

p_{ij}^(n) = P(X_{t+n} = j | X_t = i)

where the superscript (n) indicates an n-step transition. According to the Markov property, once the initial probability P(X_0 = s_0) is given, the product of successive transition probabilities represents the finite-dimensional distribution of the Markov chain:

P(X_0 = s_0, X_1 = s_1, ..., X_t = s_t) = P(X_0 = s_0) p_{s_0 s_1} p_{s_1 s_2} ... p_{s_{t-1} s_t}

The s_0, s_1, ..., s_t in the formula form a sample path, that is, the values taken by the Markov chain at each step. For the n-step transition probability, the Chapman–Kolmogorov equation shows that its value is a sum over all sample paths:

p_{ij}^(n) = Σ_{k∈S} p_{ik}^(n-m) p_{kj}^(m),  0 < m < n

The above equation shows that an n-step evolution of the Markov chain is equivalent to first evolving n-m steps and then evolving m more steps, summed over every intermediate state k in the state space. The product of the n-step transition probability and the initial probability is called the absolute probability of the state.

If the state space of a Markov chain is finite, the transition probabilities of all states in a single-step evolution can be arranged into a matrix, giving the transition matrix:

P = [p_{ij}],  i, j ∈ S

The transition matrix of a Markov chain is a right stochastic matrix: row i of the matrix is the probability distribution over all states reachable from state i in one step (a discrete distribution). The Markov chain therefore completely determines the transition matrix, and the transition matrix completely determines the Markov chain. From the properties of a probability distribution, the transition matrix is a non-negative matrix whose rows each sum to 1: Σ_{j∈S} p_{ij} = 1. The n-step transition matrix can be defined in the same way: P^(n) = [p_{ij}^(n)]. From the property of the n-step transition probability (the Chapman-Kolmogorov equation), the n-step transition matrix is the product of the preceding single-step transition matrices: P^(n) = P^n.
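A minimal numerical sketch of these formulas, reusing the two-state stock chain with the assumed values p = 0.4 and q = 0.3: the n-step transition matrix is obtained as the n-th matrix power of the single-step transition matrix.

```python
import numpy as np

# Single-step transition matrix of the two-state chain (state 0 = rising, 1 = falling);
# p and q are the assumed illustrative values used earlier.
p, q = 0.4, 0.3
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Each row sums to 1 (right stochastic matrix).
assert np.allclose(P.sum(axis=1), 1.0)

# n-step transition matrix via the Chapman-Kolmogorov equation: P^(n) = P^n.
n = 5
P_n = np.linalg.matrix_power(P, n)
print(P_n)   # entry (i, j) is the probability of being in state j after n steps, starting from i
```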

Transition graph

1. Accessible and communicating states

The evolution of a Markov chain can be represented as a transition graph, a graph structure in which each edge is assigned a transition probability. The concepts of "accessible" and "communicating" states can be introduced through the transition graph:

If for two states i and j of the Markov chain there is a sample path from i to j on which all transition probabilities are nonzero, then state j is accessible from state i, represented in the transition graph as a directed connection i → j. If i and j are accessible from each other, the two states communicate, forming a closed loop in the transition graph, written i ↔ j. By definition, accessibility and communication can be indirect, that is, they do not have to be completed in a single time step.

Communication is an equivalence relation, so equivalence classes can be constructed. In a Markov chain, an equivalence class that contains as many mutually communicating states as possible is called a communicating class.

2. Closed set and absorbing state

Given a subset C of the state space, if the Markov chain cannot leave C after entering it, that is, p_{ij} = 0 for every i ∈ C and j ∉ C, then C is closed and is called a closed set; no state outside a closed set is accessible from inside it. If a closed set contains only one state, that state is an absorbing state, represented in the transition graph as a self-loop with probability 1. A closed set can contain one or more communicating classes.

3. An example of a transition graph

Here an example of a transition graph is used to illustrate the above concepts:

By definition it can be seen that this transition graph contains three communicating classes, three closed sets, and one absorbing state, state 6. Note that in this transition graph the Markov chain eventually enters the absorbing state from any initial state. This type of Markov chain is called an absorbing Markov chain.
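These structural concepts can also be checked mechanically from a transition matrix. The sketch below uses a small assumed 3-state chain (not the unspecified 6-state diagram referred to above) and extracts its communicating classes and absorbing states from the reachability relation.

```python
import numpy as np

# Assumed illustrative transition matrix; state 2 is absorbing (p_22 = 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4],
              [0.0, 0.0, 1.0]])

n = P.shape[0]
reach = (P > 0) | np.eye(n, dtype=bool)      # one-step accessibility (every state reaches itself)
for k in range(n):                            # Warshall's algorithm: transitive closure
    reach |= reach[:, [k]] & reach[[k], :]

# Two states communicate when each is accessible from the other.
communicate = reach & reach.T
classes = {frozenset(np.flatnonzero(row)) for row in communicate}
print("communicating classes:", [sorted(int(i) for i in c) for c in classes])

absorbing = [i for i in range(n) if P[i, i] == 1.0]
print("absorbing states:", absorbing)
```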

Properties

Here we define four properties of Markov chains: irreducibility, recurrence, periodicity and ergodicity. Unlike the Markov property, these are not necessarily properties of the Markov chain itself, but properties that its states exhibit during the transition process. Each property excludes its opposite: for example, a Markov chain that is not irreducible is necessarily reducible, and so on.

Irreducibility

If the state space of a Markov chain contains only one communicating class, that is, all members of the state space communicate with one another, then the Markov chain is irreducible; otherwise the Markov chain is reducible. Irreducibility of a Markov chain means that during its evolution the random variable can move between any pair of states.

Recurrence

If, after reaching a state, the Markov chain can repeatedly return to that state during its evolution, then the state is a recurrent state, or the Markov chain has (local) recurrence; otherwise the state is transient. Formally, for a state i in the state space, the return time of the Markov chain to state i is the infimum of all possible return times:

T_i = inf { t ≥ 1 : X_t = i | X_0 = i }

If the set on the right-hand side is empty, the return time is taken as T_i = +∞. The criteria for judging the transience and recurrence of the state are as follows: state i is recurrent if P(T_i < +∞) = 1, and transient if P(T_i < +∞) < 1.

When the time step tends to infinity, the return probability of a recurrent state, that is, the expected total number of visits to it, also tends to infinity:

Σ_{n=1}^{∞} p_{ii}^(n) = +∞

In addition, if the state is recurrent, its mean recurrence time can be calculated:

μ_i = E[T_i] = Σ_{t=1}^{∞} t P(T_i = t)

If the mean recurrence time μ_i is finite, the state is "positive recurrent"; otherwise it is "null recurrent". If a state is null recurrent, the expected time between two successive visits of the Markov chain to that state is positive infinity.
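A simulation sketch of the mean recurrence time, assuming a small two-state chain with illustrative probabilities: the average number of steps between returns to a state estimates E[T_i], which is finite here because both states are positive recurrent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative two-state transition matrix; both states are positive recurrent.
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

def mean_recurrence_time(P, state, n_returns=50_000):
    """Estimate E[T_i]: the average number of steps needed to return to `state`."""
    times, x, steps = [], state, 0
    while len(times) < n_returns:
        x = rng.choice(len(P), p=P[x])   # one transition
        steps += 1
        if x == state:                   # the chain has returned
            times.append(steps)
            steps = 0
    return np.mean(times)

# For this chain the stationary distribution is (3/7, 4/7), so E[T_0] = 7/3 ≈ 2.33.
print("estimated mean recurrence time of state 0:", mean_recurrence_time(P, 0))
```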

From the above definitions of transience and recurrence, the following corollaries can be drawn:

  1. Corollary: a Markov chain with a finite number of states has at least one recurrent state, and all of its recurrent states are positive recurrent.

  2. Corollary: if a Markov chain with a finite number of states is irreducible, then all of its states are positive recurrent.

  3. Corollary: if state A is recurrent and state B is accessible from A, then A and B communicate and B is also recurrent.

  4. Corollary: if state B is accessible from state A and state B is an absorbing state, then B is a recurrent state and A is a transient state.

  5. Corollary: the set of positive recurrent states is a closed set, but a state in a closed set is not necessarily a recurrent state.

Periodicity

A positive recurrent Markov chain may be periodic, that is, during its evolution the Markov chain can return to a given state only at step counts that share a period greater than 1. Formally, given a positive recurrent state i, its return period is calculated as follows:

d = gcd { n ≥ 1 : p_{ii}^(n) > 0 }

where gcd denotes the greatest common divisor of the elements of the set. For example, if in a transition graph the possible numbers of steps in which a Markov chain can return to a certain state are 3, 6, 9, ..., then its period is 3, which is also the minimum number of steps required to return to this state. If d > 1 according to the above formula, the state is periodic; if d = 1, the state is aperiodic. From the definition of periodicity, the following corollaries can be drawn (a computational sketch follows them):

  1. Corollary: an absorbing state is an aperiodic state.

  2. Corollary: if states A and B communicate, then A and B have the same period.

  3. Corollary: if an irreducible Markov chain has a periodic state A, then all states of the Markov chain are periodic.
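A minimal sketch of the period computation, assuming a deterministic 3-cycle as the example chain: the period of a state is the gcd of the step counts n (up to a finite horizon, which suffices for a small example) with p_{ii}^(n) > 0.

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, state, max_steps=50):
    """Period of `state`: gcd of all n <= max_steps with p_ii^(n) > 0 (0 if no return is seen)."""
    steps = []
    P_n = np.eye(len(P))
    for n in range(1, max_steps + 1):
        P_n = P_n @ P
        if P_n[state, state] > 0:
            steps.append(n)
    return reduce(gcd, steps) if steps else 0

# Assumed example: a deterministic cycle 0 -> 1 -> 2 -> 0, so every state has period 3.
P_cycle = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])
print("period of state 0:", period(P_cycle, 0))   # 3
```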

Ergodicity

If a state of a Markov chain is positive recurrent and aperiodic, the state is ergodic. If a Markov chain is irreducible and one of its states is ergodic, then all states of the Markov chain are ergodic, and the chain is called an ergodic chain. From the above definition, ergodicity has the following corollaries:

  1. Corollary: if state A is an absorbing state and A is accessible from state B, then A is ergodic and B is not ergodic.

  2. Corollary: if a Markov chain with more than one state contains an absorbing state, then the Markov chain is not an ergodic chain.

  3. Corollary: if a Markov chain with more than one state forms a directed acyclic graph, or a single directed cycle, then the Markov chain is not an ergodic chain.

The ergodic chain is an aperiodic Markov chain with stable, steady-state behaviour on long time scales, so it is the type of Markov chain that has been most widely studied and applied.

Steady-state analysis

Here we introduce descriptions of the long-time-scale behaviour of a Markov chain, namely the stationary distribution and the limiting distribution, and define the stationary Markov chain.

Stationary distribution

Given a Markov chain, if there is a probability distribution π on its state space and the distribution satisfies the following condition:

π = π P,  that is,  π_j = Σ_{i∈S} π_i p_{ij}  for all j ∈ S

then π is a stationary distribution of the Markov chain, where P and p_{ij} are the transition matrix and transition probabilities. The system of linear equations on the right-hand side of the equivalence is called the balance equations. Further, if a stationary distribution of the Markov chain exists and its initial distribution is a stationary distribution, then the Markov chain is in a steady state. From a geometric point of view, since the components of the stationary distribution are non-negative and sum to 1, the support of the distribution lies on a standard simplex.

An irreducible Markov chain is positive recurrent if and only if it has a unique stationary distribution, that is, when the balance equations have a unique solution on the positive simplex, and the stationary distribution is expressed as follows:

π_i = 1 / μ_i = 1 / E[T_i]

The above conclusion is called the stationary distribution criterion. For an irreducible and recurrent Markov chain, solving the balance equations gives a unique eigenvector up to scale, that is, an invariant measure. If the Markov chain is positive recurrent, its stationary distribution is the eigenvector with eigenvalue 1 obtained from the balance equations, that is, the invariant measure after normalization. Therefore a necessary and sufficient condition for a Markov chain to have a stationary distribution is that it has a positive recurrent state. In addition, examples show that if a Markov chain contains several communicating classes composed of positive recurrent states (by the properties above these are all closed sets, so the Markov chain is not irreducible), then each communicating class has its own stationary distribution, and the steady state reached during evolution depends on the initial distribution.
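A minimal sketch of solving the balance equations numerically, assuming a small irreducible chain: the stationary distribution is the left eigenvector of the transition matrix with eigenvalue 1, normalized so that its components sum to 1.

```python
import numpy as np

# Assumed illustrative irreducible, aperiodic transition matrix.
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

# pi P = pi  <=>  P^T pi^T = pi^T: pi is the eigenvector of P^T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()                          # normalize the invariant measure to a distribution

print("stationary distribution:", pi)
print("balance equations hold:", np.allclose(pi @ P, pi))
```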

Limiting distribution

If there is a probability distribution π on the state space of a Markov chain that satisfies the following relationship:

lim_{t→∞} P(X_t = j | X_0 = i) = π_j  for every initial state i

then the distribution π is the limiting distribution of the Markov chain. Note that the definition of the limiting distribution has nothing to do with the initial distribution: for any initial distribution, when the time step tends to infinity, the distribution of the random variable tends to the limiting distribution. By definition, a limiting distribution must be a stationary distribution, but the converse is not true. For example, a periodic Markov chain may have a stationary distribution, but a periodic Markov chain does not converge to any distribution, so its stationary distribution is not a limiting distribution.

1. Limiting theorem

If two independent aperiodic stationary Markov chains, that is, ergodic chains, have the same transition matrix, then as the time step tends to infinity the difference between their distributions tends to zero. According to coupling theory in stochastic processes, the conclusion is expressed as follows: for the same ergodic chain on a state space, given any two initial distributions p and q, we have:

lim_{t→∞} sup_{j∈S} | P_p(X_t = j) - P_q(X_t = j) | = 0

where sup denotes the supremum. Considering the properties of the stationary distribution, this conclusion has a corollary: for an ergodic chain, when the time step tends to infinity, its distribution tends to the stationary distribution:

lim_{t→∞} P(X_t = j | X_0 = i) = π_j

This conclusion is sometimes referred to as the limit theorem of the Markov chain, indicating that if the Markov chain is ergodic, its limiting distribution is a stationary distribution. For an irreducible and aperiodic Markov chain, being an ergodic chain is equivalent to the existence of its limiting distribution, and is also equivalent to the existence of its stationary distribution.
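The limit theorem can be observed numerically: raising the assumed transition matrix from the stationary-distribution sketch to a high power makes every row converge to the same distribution, independent of the starting state.

```python
import numpy as np

# Same assumed irreducible, aperiodic chain as in the stationary-distribution sketch.
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

P_100 = np.linalg.matrix_power(P, 100)
print(P_100)
# All rows are numerically identical: each row is the limiting distribution,
# which coincides with the stationary distribution computed earlier.
```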

2. Ergodic theorem

If a Markov chain is an ergodic chain, then by the ergodic theorem the ratio of the number of visits to a state to the number of time steps approaches, as the time step tends to infinity, the reciprocal of the mean recurrence time, that is, the stationary (or limiting) probability of that state:

lim_{t→∞} (1/t) Σ_{s=1}^{t} 1{X_s = j} = 1/μ_j = π_j  (almost surely)

The proof of the ergodic theorem relies on the strong law of large numbers (SLLN). It shows that, regardless of the initial distribution of an ergodic chain, after a sufficiently long evolution, many observations of one random variable (the limit theorem) and one observation of many random variables (the left-hand side of the above equation) both yield an approximation of the limiting distribution. Since the ergodic chain satisfies both the limit theorem and the ergodic theorem, MCMC builds an ergodic chain to ensure that it converges to the stationary distribution during iteration.
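A simulation sketch of the ergodic theorem under the same assumed chain: the fraction of time a single long sample path spends in each state approaches the stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same assumed ergodic chain as above.
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

steps = 100_000
visits = np.zeros(len(P))
x = 0                                       # arbitrary initial state
for _ in range(steps):
    x = rng.choice(len(P), p=P[x])          # one transition
    visits[x] += 1

print("empirical visit frequencies:", visits / steps)   # ≈ the stationary distribution
```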

Stationary Markov chain

If a Markov chain has a unique stationary distribution and its limiting distribution converges to that stationary distribution, then by definition this is equivalent to the Markov chain being a stationary Markov chain. A stationary Markov chain is a strictly stationary stochastic process, and its evolution does not depend on the time index:

P(X_{t+1} = j | X_t = i) = P(X_{s+1} = j | X_s = i)  for all s, t

From the limit theorem it can be seen that an ergodic chain is a stationary Markov chain. In addition, from the above definition, the transition matrix of a stationary Markov chain is a constant matrix, and the n-step transition matrix is the n-th power of that constant matrix. A stationary Markov chain is also called a time-homogeneous Markov chain. Correspondingly, a Markov chain that does not satisfy the above condition is called a non-stationary Markov chain or a time-inhomogeneous Markov chain.

If a stationary Markov chain satisfies the detailed balance condition for any two states, it is reversible and is called a reversible Markov chain:

π_i p_{ij} = π_j p_{ji}  for all i, j ∈ S

Reversibility of a Markov chain is a stricter requirement than irreducibility: not only can the chain move between any pair of states, but the probability flows in the two directions of each transition are equal. Reversibility is therefore a sufficient but not necessary condition for a stationary Markov chain. In Markov chain Monte Carlo (MCMC), constructing a reversible Markov chain that satisfies the detailed balance condition is one way to ensure that the sampling distribution is a stationary distribution.
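A small sketch of the detailed balance check, assuming a birth-death style chain (transitions only between neighbouring states), which is always reversible with respect to its stationary distribution.

```python
import numpy as np

# Assumed birth-death chain on {0, 1, 2}: transitions only between neighbouring states.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Stationary distribution via the left eigenvector with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Detailed balance: pi_i * p_ij == pi_j * p_ji for every pair of states.
flow = pi[:, None] * P          # flow[i, j] = pi_i * p_ij
print("detailed balance holds:", np.allclose(flow, flow.T))
```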

Special cases

Bernoulli process

Main entry: Bernoulli process

The Bernoulli process is also called the binomial Markov chain. It is constructed as follows: given a series of independent "flags" (Bernoulli trials), each flag is binary and is positive with probability p and negative with probability 1-p. Let the random process X_n denote the number of positive flags among the first n flags; then X_n is a Bernoulli process in which the random variables follow the binomial distribution:

P(X_n = k) = C(n, k) p^k (1-p)^(n-k)

From the construction it can be seen that the probability that a newly added flag is positive has nothing to do with the number of previous positive flags, so the process has the Markov property and the Bernoulli process is a Markov chain.

Gambler's ruin problem

See: Gambler's ruin

Suppose a gambler holds a finite number of chips and bets in a casino; with each bet he wins one chip with probability p and loses one chip with probability 1-p. If the gambler keeps betting, the total number of chips he holds, X_t, is a Markov chain with the following transition probabilities:

p_{00} = 1;  p_{i,i+1} = p,  p_{i,i-1} = 1-p  for i ≥ 1

The state in which the gambler has lost all his chips is the absorbing state. By a one-step (first-step) analysis it can be shown that when p ≤ 1/2, the Markov chain enters the absorbing state with probability 1; that is, no matter how many chips the gambler initially holds, he will eventually lose them all as the betting progresses.
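A simulation sketch of the gambler's ruin chain, with assumed parameters (win probability p, initial capital of 10 chips, and a cap on the number of bets per run, so the estimate for p ≤ 1/2 approaches 1 only as the cap grows):

```python
import random

def ruin_probability(p=0.5, start=10, trials=500, max_steps=5_000):
    """Estimate P(the gambler eventually reaches 0 chips) by simulation (assumed parameters)."""
    ruined = 0
    for _ in range(trials):
        chips = start
        for _ in range(max_steps):            # cap the run; surviving paths count as not ruined
            chips += 1 if random.random() < p else -1
            if chips == 0:                     # absorbing state: the gambler is broke
                ruined += 1
                break
    return ruined / trials

print("estimated ruin probability, p = 0.5:", ruin_probability(p=0.5))
print("estimated ruin probability, p = 0.6:", ruin_probability(p=0.6))
```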

Random walk

Main entry: Random walk

Define a series of independent and identically distributed (iid) integer random variables Z_1, Z_2, ..., and define the following random process:

X_t = X_0 + Σ_{s=1}^{t} Z_s

This random process is a random walk on the integers, and Z_s is the step length. Since the step lengths are iid, the current step is independent of the previous steps, and the random process is a Markov chain. Both the Bernoulli process and the gambler's ruin problem are special cases of the random walk.

From the above example of the random walk, we can see that Markov chains have a general construction method. Specifically, if a random process {X_t} on the state space has the following form:

X_{t+1} = f(X_t, U_{t+1})

where U_1, U_2, ... are iid random variables independent of X_0, then the random process is a Markov chain, and its one-step transition probability is p_{ij} = P(f(i, U) = j). This conclusion shows that a Markov chain can be numerically simulated using iid random variables (random numbers) uniformly distributed on the interval [0, 1].
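A minimal sketch of this general construction, assuming the small transition matrix used in earlier examples: each iid uniform random number U_{t+1} on [0, 1] is mapped through the cumulative distribution of the current row, which realizes X_{t+1} = f(X_t, U_{t+1}).

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative transition matrix.
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
cum = np.cumsum(P, axis=1)                   # cumulative distribution of every row

def f(state, u):
    """Deterministic update X_{t+1} = f(X_t, U_{t+1}) driven by a uniform random number u."""
    return int(np.searchsorted(cum[state], u))

x, path = 0, [0]
for u in rng.uniform(size=20):               # iid U(0, 1) random numbers
    x = f(x, u)
    path.append(x)
print("sample path:", path)
```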

Generalizations

Markov process

Main entry: Markov process

The Markov process is also called the continuous-time Markov chain. It is the generalization of the Markov chain (the discrete-time Markov chain): its state space is still a countable set, but the one-dimensional index set is no longer restricted to be countable and can represent continuous time. The properties of a Markov process parallel those of a Markov chain, and its Markov property is usually expressed as follows:

P(X(t+s) = j | X(u), u ≤ t) = P(X(t+s) = j | X(t)),  s > 0

Since the state space of the Markov process is a countable set, its sample path in continuous time is almost surely (a.s.) a right-continuous step function, so the Markov process can be expressed as a jump process and related to a Markov chain:

X(t) = Y_n  for t_n ≤ t < t_{n+1},  where t_{n+1} = t_n + S_n

where S_n is the sojourn time in a given state and t_n is the n-th member of the ordered index set (the time of the n-th jump). The Markov chain (Y_n) and the sojourn times (S_n) satisfying the above relationship form, over finite time segments, the embedded process of the jump process.

Markov model

Main article: Markov model

The Markov chain (or Markov process) is not the only random process based on the Markov property. In fact, stochastic processes and models such as the hidden Markov model, the Markov decision process and the Markov random field all have Markov properties and are collectively referred to as Markov models. Here is a brief introduction to the other members of the Markov model family:

1. Hidden Markov model (HMM)

An HMM is a Markov chain whose state space is not fully observable, that is, a Markov chain containing hidden states. The observable part of the HMM is called the emission state; it is related to the hidden state, but the relation is not a complete one-to-one correspondence. Take speech recognition as an example: the sentence to be recognized is the unobservable hidden state, and the received speech or audio is the emission state related to that sentence. A common application of the HMM is then to infer, based on the Markov property, the corresponding sentence from the speech input, that is, to recover the hidden states from the emission states.

2. Markov decision process (MDP)

An MDP is a Markov chain that introduces "actions" on top of the state space, that is, the transition probability of the Markov chain is related not only to the current state but also to the current action. An MDP includes a pair of interacting objects, namely the agent and the environment, and defines five model elements: state, action, policy, reward and return. Among them, the policy is a mapping from states to actions, and the return is the discounted or accumulated reward over time. In the evolution of an MDP, the agent perceives the initial state of the environment and executes an action according to its policy; under the influence of the action, the environment enters a new state and feeds a reward back to the agent. The agent then receives the reward, updates its policy, and keeps interacting with the environment. The MDP is one of the mathematical models of reinforcement learning, used to model the stochastic policies and rewards achievable by an agent. One generalization of the MDP is the partially observable Markov decision process (POMDP), which introduces the hidden and emission states of the HMM into the MDP.

3. Markov random field (MRF)

An MRF is the generalization of the Markov chain from a one-dimensional index set to a high-dimensional space. The Markov property of an MRF states that the state of any random variable is determined only by the states of all of its neighbouring random variables. Analogously to the finite-dimensional distribution of a Markov chain, the joint probability distribution of the random variables in an MRF is the product over all cliques containing those random variables. The most common example of an MRF is the Ising model.

Harris chain

The Harris chain is the generalization of the Markov chain from a countable state space to a continuous state space. Given a stationary Markov chain on a measurable space, consider any subset A of the measurable space with positive measure and the return time of the subset

T_A = inf { t ≥ 1 : X_t ∈ A }.

If the Markov chain satisfies

P(T_A < +∞ | X_0 = x) = 1  for every starting point x,

then the Markov chain is a Harris chain, where the measure on the measurable space is a σ-finite measure.

Applications

MCMC

Building a Markov chain whose limiting distribution is the sampling (target) distribution is the core of Markov chain Monte Carlo (MCMC). The key step of MCMC is to iterate the Markov chain over many time steps to obtain random numbers that approximately follow the sampling distribution, and to use these random numbers to approximate the mathematical expectation of a target function under the sampling distribution:

E_π[f(X)] ≈ (1/N) Σ_{n=1}^{N} f(X_n)

The limiting-distribution property of the Markov chain means that the MCMC estimate is consistent (asymptotically unbiased): as the number of samples tends to infinity, the true value of the target expectation is obtained. This distinguishes MCMC from alternative methods such as variational Bayesian inference, which is usually computationally cheaper than MCMC but cannot guarantee an asymptotically unbiased estimate.
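A minimal Metropolis-Hastings sketch, assuming a standard normal target density and a Gaussian random-walk proposal (illustrative choices, not taken from the source): the accept/reject rule makes the constructed chain reversible with the target as its stationary distribution, and sample averages along the chain approximate expectations under the target.

```python
import numpy as np

rng = np.random.default_rng(3)

def target_density(x):
    """Unnormalized target density; here an assumed standard normal."""
    return np.exp(-0.5 * x * x)

def metropolis_hastings(n_samples=50_000, step=1.0):
    """Random-walk Metropolis-Hastings: builds a Markov chain whose limit is the target."""
    x = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()            # symmetric proposal
        accept_prob = min(1.0, target_density(proposal) / target_density(x))
        if rng.uniform() < accept_prob:               # accept/reject preserves detailed balance
            x = proposal
        samples[i] = x
    return samples

samples = metropolis_hastings()
print("estimated E[X]  :", samples.mean())        # ≈ 0 for the standard normal target
print("estimated E[X^2]:", (samples ** 2).mean()) # ≈ 1
```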

Other applications

In physics and chemistry, Markov chains and Markov processes are used to model dynamical systems, forming Markov dynamics. In queueing theory, the Markov chain is the basic model of the queueing process. In signal processing, Markov chains serve as the mathematical model of some sequential data compression algorithms, such as Ziv-Lempel coding. In the financial field, Markov chain models are used to predict the market share of enterprise products.
