Markov chain

History

The Markov chain was first proposed by the Russian mathematician Andrey Markov (Андрей Андреевич Марков). In research published between 1906 and 1907, in order to show that independence between random variables is not a necessary condition for the weak law of large numbers and the central limit theorem to hold, Markov constructed a random process in which the conditional probabilities of the variables depend on one another, and proved that under certain conditions it converges to a set of vectors. This random process was later named the Markov chain.

After the Markov chain was proposed, Paul Ehrenfest and Tatiana Afanasyeva used it in 1907 to establish the Ehrenfest model of diffusion. In 1912, Jules Henri Poincaré studied Markov chains on finite groups and obtained the Poincaré inequality.

In 1931, Andrey Kolmogorov (Андрей Николаевич Колмогоров) extended the Markov chain to a continuous index set while studying the diffusion problem, obtaining the continuous-time Markov chain, and introduced a formula for computing its joint distribution. Independently of Kolmogorov, Sydney Chapman also obtained this formula in 1926 while studying Brownian motion; it is now known as the Chapman-Kolmogorov equation.

In 1953, Nicholas Metropolis et al. carried out random simulation of a fluid's target distribution function by constructing a Markov chain. The method was further improved by Wilfred K. Hastings in 1970 and developed into the present-day Metropolis-Hastings algorithm.

In 1957, Richard Bellman first proposed Markov decision processes (MDP) through a discrete stochastic optimal control model.

Between 1959 and 1962, the Soviet mathematician Eugene Borisovich Dynkin refined Kolmogorov's theory and used the Dynkin formula to connect stationary Markov processes with martingales.

Based on Markov chains, more complex Markov models (such as hidden Markov models and Markov random fields) were subsequently proposed and have found applications in practical problems such as pattern recognition.

Definition

A Markov chain is a set of discrete random variables with the Markov property. Specifically, let a set of random variables X = {X_t : t > 0} in a probability space be indexed by a one-dimensional countable set. If the values of the random variables all lie in a countable set S, and the conditional probabilities of the random variables satisfy the following relationship:

P(X_{t+1} = s_{t+1} | X_1 = s_1, X_2 = s_2, ..., X_t = s_t) = P(X_{t+1} = s_{t+1} | X_t = s_t)

then X is called a Markov chain, the countable set S is called the state space, and the values the Markov chain takes in the state space are called states. The Markov chain defined here is a discrete-time Markov chain (Discrete-Time MC, DTMC). A chain with a continuous index set is called a continuous-time Markov chain (Continuous-Time MC, CTMC), but it is in essence a Markov process. Commonly, the members of the index set of a Markov chain are called "steps" or "time-steps".

The above formula defines the Markov property at the same time as it defines the Markov chain. This property is also called "memorylessness": given the random variable at step t, the random variable at step t+1 is conditionally independent of all earlier random variables, X_{t+1} ⊥ (X_1, ..., X_{t-1}) | X_t. On this basis, the Markov chain also has the strong Markov property, that is, for any stopping time, the states of the Markov chain before and after the stopping time are independent of each other.

Illustrative example

A common example of a Markov chain is a simplified model of stock fluctuations: if a stock rises on a given day, then tomorrow it starts to fall with probability p and continues to rise with probability 1-p; if the stock falls on a given day, then tomorrow it starts to rise with probability q and continues to fall with probability 1-q. The rise and fall of the stock form a Markov chain, and the concepts in the definition correspond to the example as follows:

  • Random variable: the state of the stock on a given day; state space: "rising" and "falling"; index set: the number of days.

  • Conditional probability relationship: by definition, even if all the historical states of the stock are known, whether it rises or falls on a given day is related only to the state of the previous day.

  • Memorylessness: the stock's performance on a given day depends only on the previous day and has nothing to do with the other historical states (the conditional probability relationship and memorylessness are defined by the same formula).

  • Independence of the states before and after a stopping time: take the stock's record of rises and falls and cut a segment out of it. We cannot tell which segment was cut out, because the cut point is a stopping time; the records before and after the stopping time t (at t-1 and t+1) have no dependence on each other.
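
A minimal Python sketch of simulating this two-state stock chain; the probability values p = 0.4 and q = 0.3 are illustrative assumptions, not taken from the article.

```python
import random

# Simplified stock model: two states, "rising" and "falling".
# p: probability that a rising day is followed by a falling day.
# q: probability that a falling day is followed by a rising day.
p, q = 0.4, 0.3  # illustrative values

def simulate(days, state="rising"):
    """Simulate the chain for a number of days and return the path of states."""
    path = [state]
    for _ in range(days - 1):
        if state == "rising":
            state = "falling" if random.random() < p else "rising"
        else:
            state = "rising" if random.random() < q else "falling"
        path.append(state)
    return path

print(simulate(10))
```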

n-th order Markov chain

An n-th order Markov chain has n-th order memory and can be regarded as a generalization of the Markov chain. By analogy with the definition of the Markov chain, an n-th order Markov chain satisfies the following condition:

P(X_t = s_t | X_{t-1} = s_{t-1}, ..., X_1 = s_1) = P(X_t = s_t | X_{t-1} = s_{t-1}, ..., X_{t-n} = s_{t-n}),   t > n

According to the above formula, the traditional Markov chain can be regarded as a 1st-order Markov chain. By the Markov property, an n-th order Markov chain can be obtained by taking several consecutive states of a Markov chain as components; equivalently, an n-th order chain can be rewritten as an ordinary Markov chain whose states are n-tuples of the original states.
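
A small sketch of the tuple construction just mentioned, assuming an illustrative second-order chain over the states {0, 1}: the pair (X_{t-1}, X_t) is treated as the state of an ordinary first-order Markov chain. The probability table is made up for illustration.

```python
import random

# Second-order chain on {0, 1}: the next state depends on the last two states.
# P(next = 1 | last two states) -- illustrative values only.
second_order = {
    (0, 0): 0.1,
    (0, 1): 0.5,
    (1, 0): 0.4,
    (1, 1): 0.9,
}

def step(pair):
    """One step of the equivalent first-order chain on pairs (X_{t-1}, X_t)."""
    nxt = 1 if random.random() < second_order[pair] else 0
    return (pair[1], nxt)

pair = (0, 0)
path = [pair]
for _ in range(10):
    pair = step(pair)
    path.append(pair)
print(path)
```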

Theory and properties

Transition theory

The change of the states of the random variables of a Markov chain over time is called evolution or transition. Here we introduce two ways of describing the structure of a Markov chain, namely the transition matrix and the transition graph, and define the properties a Markov chain exhibits during transition.

Transition probability and transition matrix

Main article: Transition matrix

The conditional probabilities between the random variables of a Markov chain can be defined in the form of the (single-step) transition probability and the n-step transition probability:

p_{ij} = P(X_{t+1} = j | X_t = i),   p_{ij}^{(n)} = P(X_{t+n} = j | X_t = i)

where the superscript (n) indicates an n-step transition. According to the Markov property, once the initial probability is given, the product of successive transition probabilities represents the finite-dimensional distribution of the Markov chain:

P(X_1 = s_1, X_2 = s_2, ..., X_t = s_t) = P(X_1 = s_1) p_{s_1 s_2} p_{s_2 s_3} \cdots p_{s_{t-1} s_t}

Here s_1, s_2, ..., s_t is a sample path, that is, the value the Markov chain takes at each step. For the n-step transition probability, the Chapman-Kolmogorov equation shows that its value is a sum over all sample paths through the intermediate states:

p_{ij}^{(n)} = \sum_{k \in S} p_{ik}^{(n-m)} p_{kj}^{(m)},   0 < m < n

The above equation shows that evolving the Markov chain directly for n steps is equivalent to first evolving it for n-m steps and then for another m steps, where the intermediate state k ranges over the state space of the Markov chain. The product of the n-step transition probability and the initial probability is called the absolute probability of the state.

If the state space of a Markov chain is finite, the transition probabilities of all states under a single-step evolution can be arranged in a matrix, giving the transition matrix:

P = (p_{ij})_{i,j \in S}

The transition matrix of a Markov chain is a right stochastic matrix: row i of the matrix is the probability distribution over all possible states given X_t = i (a discrete distribution), so the Markov chain completely determines the transition matrix, and the transition matrix completely determines the Markov chain. By the properties of a probability distribution, the transition matrix is a non-negative matrix and the sum of the elements in each row equals 1:

\sum_{j \in S} p_{ij} = 1

The n-step transition matrix can be defined in the same way: P^{(n)} = (p_{ij}^{(n)}). From the property of the n-step transition probability (the Chapman-Kolmogorov equation), the n-step transition matrix is the product of the successive one-step transition matrices; for a chain with a fixed transition matrix this is P^{(n)} = P^n.
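
A small numerical sketch, assuming the illustrative two-state stock matrix from the earlier example (p = 0.4, q = 0.3): it checks the row-sum property and computes an n-step transition matrix by repeated matrix multiplication.

```python
import numpy as np

# Illustrative transition matrix for the two-state stock example
# (states: 0 = "rising", 1 = "falling").
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# Each row is a probability distribution, so the rows must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# n-step transition matrix: product of the one-step matrices (here P^n).
n = 5
P_n = np.linalg.matrix_power(P, n)
print(P_n)              # (i, j) entry: probability of being in j after n steps from i
print(P_n.sum(axis=1))  # rows still sum to 1
```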

Transition graph

1. Reachability and communication

The evolution of a Markov chain can be represented as a transition graph, in which each edge is assigned a transition probability. The concepts of "reachable" and "communicating" can be introduced through the transition graph:

If there is a sample path from state i to state j on which all transition probabilities are positive, then state j is a reachable state of state i, which is represented as a directed connection i → j in the transition graph. If i and j are mutually reachable, the two states communicate, forming a closed loop in the transition graph, denoted i ↔ j. By definition, reachability and communication can be indirect, that is, they do not have to be completed in a single time step.

Communication is an equivalence relation, so equivalence classes can be constructed. In a Markov chain, equivalence classes that contain as many states as possible are called communicating classes.

2. Closed sets and absorbing states

Given a subset of the state space, if the Markov chain cannot leave the subset once it has entered it, that is, the transition probability from any state inside the subset to any state outside it is 0, then the subset is closed and is called a closed set; all states outside a closed set are unreachable from it. If a closed set contains only one state, that state is an absorbing state, represented in the transition graph as a self-loop with probability 1. A closed set can contain one or more communicating classes.

3. Example of a transition graph

Here an example of a transition graph is used to illustrate the above concepts:

(transition graph not shown)

From the definitions it can be seen that the transition graph contains three communicating classes, three closed sets, and an absorbing state, state 6. Note that in the above transition graph the Markov chain will eventually enter the absorbing state from any state. This type of Markov chain is called an absorbing Markov chain.

Properties

Here we define four properties of Markov chains: irreducibility, recurrence, periodicity and ergodicity. Unlike the Markov property, these are not necessarily properties of the Markov chain as a whole, but properties that its states exhibit during the transition process. The above properties are all exclusive, that is, a Markov chain that is not irreducible is necessarily reducible, and so on.

Irreducibility

If the state space of a Markov chain has only one communicating class, that is, all members of the state space communicate with one another, then the Markov chain is irreducible; otherwise the Markov chain is reducible. Irreducibility of a Markov chain means that during its evolution the random variables can move between any pair of states.

Recurrence

If, after reaching a state, a Markov chain can return to that state repeatedly during its evolution, then the state is recurrent, or the Markov chain has (local) recurrence; otherwise the state is transient. Formally, for a state s in the state space, the return time of the Markov chain to that state is the infimum of all possible return times:

T_s = \inf \{ t \ge 1 : X_t = s \},   given X_0 = s

If the set in the above formula is empty, neither transience nor recurrence is defined for the state; if it is non-empty, the criteria for transience and recurrence of the state are as follows:

recurrent: P(T_s < \infty) = 1;   transient: P(T_s < \infty) < 1

As the time step tends to infinity, the return probability of a recurrent state, that is, the expectation of the total number of visits to it, also tends to infinity:

\sum_{n=1}^{\infty} p_{ss}^{(n)} = \infty

In addition, if the state is recurrent, its mean recurrence time can be calculated:

\mu_s = E[T_s] = \sum_{t=1}^{\infty} t \, P(T_s = t)

If the mean recurrence time is finite, the state is "positive recurrent", otherwise it is "null recurrent". If a state is null recurrent, the expectation of the time interval between two visits of the Markov chain to that state is positive infinity.
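
A rough simulation sketch, again using the illustrative two-state matrix from earlier, that estimates the mean recurrence time of a state by averaging the observed gaps between visits; for a positive recurrent state this estimate approaches the reciprocal of the state's stationary probability (see the ergodic theorem below).

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.6, 0.4],   # illustrative two-state transition matrix
              [0.3, 0.7]])

def mean_recurrence_time(P, state, steps=100_000):
    """Estimate E[T_state] by averaging the gaps between successive visits."""
    x, last_visit, gaps = state, 0, []
    for t in range(1, steps + 1):
        x = rng.choice(len(P), p=P[x])
        if x == state:
            gaps.append(t - last_visit)
            last_visit = t
    return np.mean(gaps)

print(mean_recurrence_time(P, 0))  # roughly 1 / (stationary probability of state 0)
```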

From the above definitions of transience and recurrence the following inferences can be made:

  1. Inference: a Markov chain with a finite number of states has at least one recurrent state, and all of its recurrent states are positive recurrent.

  2. Inference: if a Markov chain with a finite number of states is irreducible, then all of its states are positive recurrent.

  3. Inference: if state A is recurrent and state B is reachable from A, then A and B communicate and B is also recurrent.

  4. Inference: if state B is reachable from state A and state B is an absorbing state, then B is a recurrent state and A is a transient state.

  5. Inference: the set composed of the positive recurrent states is a closed set, but a state in a closed set is not necessarily a recurrent state.

Periodicity

A positive recurrent Markov chain may be periodic, that is, during its evolution the Markov chain can return to a given state only at multiples of a period greater than 1. Formally, given a positive recurrent state s, its return period is calculated as follows:

d = \gcd \{ n > 0 : P(X_n = s | X_0 = s) > 0 \}

where gcd denotes the greatest common divisor of the elements of the set. For example, if in the transition graph the numbers of steps at which a Markov chain can return to a certain state are multiples of 3 (3, 6, 9, ...), then its period is 3, which is also the minimum number of steps required to return to this state. If the above formula gives d > 1, the state is periodic; if d = 1, the state is aperiodic. From the definition of periodicity the following inferences can be made (a small computational sketch follows the list):

  1. Inference: an absorbing state is an aperiodic state.

  2. Inference: if state A and state B communicate, then A and B have the same period.

  3. Inference: if an irreducible Markov chain has a periodic state A, then all states of the Markov chain are periodic states.
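
A small sketch of computing the period of a state from a transition matrix using the gcd definition above: it collects the step counts n (up to a cutoff) for which the n-step return probability is positive, then takes their greatest common divisor. The 3-state cyclic matrix is only an illustration.

```python
import math
from functools import reduce

import numpy as np

def period(P, state, max_steps=50):
    """Period of `state`: gcd of all n (up to a cutoff) with (P^n)[state, state] > 0."""
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, max_steps + 1):
        Pn = Pn @ P
        if Pn[state, state] > 1e-12:
            returns.append(n)
    return reduce(math.gcd, returns) if returns else 0

# Illustrative deterministic 3-cycle: 0 -> 1 -> 2 -> 0, so every state has period 3.
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(period(C, 0))  # 3
```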

Ergodicity

If a state of a Markov chain is positive recurrent and aperiodic, the state is ergodic. If a Markov chain is irreducible and one of its states is ergodic, then all states of the Markov chain are ergodic, and the chain is called an ergodic chain. From the above definition, ergodicity has the following inferences:

  1. Inference: if state A is an absorbing state and A is reachable from state B, then A is ergodic and B is not ergodic.

  2. Inference: if a Markov chain with more than one state contains an absorbing state, then the Markov chain is not an ergodic chain.

  3. Inference: if a Markov chain with more than one state forms a directed acyclic graph, or a single closed loop, then the Markov chain is not an ergodic chain.

The ergodic chain is an aperiodic, stable Markov chain with steady-state behavior on long time scales, so it is the class of Markov chains that has been most widely studied and applied.

Steady-state analysis

Here we describe the behavior of Markov chains on long time scales, namely the stationary distribution and the limiting distribution, and define the stationary Markov chain.

Stationary distribution

Given a Markov chain, if there is a probability distribution π on its state space and that distribution satisfies the following condition:

\pi_j = \sum_{i \in S} \pi_i p_{ij}   for all j in S,   equivalently   \pi P = \pi

then π is a stationary distribution of the Markov chain, where P is the transition matrix and p_{ij} the transition probabilities. This system of linear equations is called the balance equations. Further, if a stationary distribution of the Markov chain exists and its initial distribution is a stationary distribution, then the Markov chain is in a steady state. From a geometric point of view, the components of a stationary distribution are non-negative and sum to 1, so the distribution can be regarded as a point on a standard simplex.

An irreducible Markov chain is positive recurrent if and only if it has a unique stationary distribution, that is, if and only if the balance equations have a unique solution on the probability simplex; the stationary distribution is then expressed as follows:

\pi_s = \frac{1}{\mu_s} = \frac{1}{E[T_s]}

The above conclusion is called the stationary distribution criterion. For an irreducible and recurrent Markov chain, solving the balance equations yields a unique eigenvector up to scale, that is, the invariant measure. If the Markov chain is positive recurrent, its stationary distribution is the eigenvector with eigenvalue 1 obtained by solving the balance equations, that is, the invariant measure after normalization. Therefore, a necessary and sufficient condition for a Markov chain to have a stationary distribution is that it has a positive recurrent state. In addition, examples show that if a Markov chain contains several communicating classes composed of positive recurrent states (by the properties above they are all closed sets, so the Markov chain is not irreducible), then each communicating class has its own stationary distribution, and the steady state reached by the evolution depends on the initial distribution.
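
A sketch of solving the balance equations numerically for the illustrative two-state matrix: the stationary distribution is taken as the left eigenvector of P with eigenvalue 1, normalized so that its components sum to 1.

```python
import numpy as np

P = np.array([[0.6, 0.4],   # illustrative transition matrix
              [0.3, 0.7]])

# Solve pi P = pi, i.e. pi is a left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()          # normalize the invariant measure to a distribution

print(pi)                   # approximately [3/7, 4/7] for this matrix
print(pi @ P)               # equals pi: the balance equations hold
```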

Limiting distribution

If there is a probability distribution π on the state space of a Markov chain that satisfies the following relationship:

\lim_{t \to \infty} P(X_t = s) = \pi_s   for every state s and any initial distribution

then that distribution is the limiting distribution of the Markov chain. Note that the definition of the limiting distribution does not depend on the initial distribution: for any initial distribution, as the time step tends to infinity, the probability distribution of the random variable tends to the limiting distribution. By definition, a limiting distribution is necessarily a stationary distribution, but the converse does not hold. For example, a periodic Markov chain may have a stationary distribution, but a periodic Markov chain does not converge to any distribution, so its stationary distribution is not a limiting distribution.

1. Limit theorem

Consider two independent aperiodic, positive recurrent Markov chains, that is, two ergodic chains with the same transition matrix. As the time step tends to infinity, the difference between their distributions tends to zero. Using coupling theory from stochastic processes, the conclusion can be stated as follows: for ergodic chains X and Y on the same state space with the same transition matrix, given arbitrary initial distributions,

\lim_{t \to \infty} \sup_{s \in S} | P(X_t = s) - P(Y_t = s) | = 0

where sup denotes the supremum. Considering the properties of the stationary distribution, this conclusion has a corollary: for an ergodic chain, as the time step tends to infinity, its distribution tends to the stationary distribution:

\lim_{t \to \infty} P(X_t = s) = \pi_s

This conclusion is sometimes referred to as the limit theorem of Markov chains. It states that if a Markov chain is ergodic, its limiting distribution is the stationary distribution. For an irreducible and aperiodic Markov chain, being an ergodic chain is equivalent to the existence of its limiting distribution, and also equivalent to the existence of its stationary distribution.
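
A quick numerical illustration of the limit theorem, again with the illustrative two-state matrix: raising the transition matrix to a large power makes every row approach the stationary distribution, regardless of the starting state or initial distribution.

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])          # illustrative ergodic chain

P_inf = np.linalg.matrix_power(P, 50)
print(P_inf)                        # both rows are close to [3/7, 4/7]

# Any initial distribution converges to the same limiting distribution.
for mu0 in (np.array([1.0, 0.0]), np.array([0.2, 0.8])):
    print(mu0 @ P_inf)
```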

2. Ergodic theorem

If a Markov chain is an ergodic chain, then by the ergodic theorem, the ratio of the number of visits to a state to the number of time steps approaches the reciprocal of the mean recurrence time as the time step approaches infinity, that is, the stationary or limiting probability of that state:

\lim_{t \to \infty} \frac{1}{t} \sum_{k=1}^{t} \mathbf{1}\{X_k = s\} = \frac{1}{\mu_s} = \pi_s

The proof of the ergodic theorem relies on the strong law of large numbers (SLLN). The theorem shows that, regardless of the initial distribution of an ergodic chain, after a sufficiently long evolution, both many observations of a single random variable (the limit theorem) and a single observation of many random variables (the left-hand side of the above equation) yield an approximation of the limiting distribution. Because the ergodic chain satisfies both the limit theorem and the ergodic theorem, MCMC constructs an ergodic chain to ensure that it converges to the stationary distribution during iteration.
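
A small simulation sketch of the ergodic theorem with the same illustrative matrix: the long-run fraction of time spent in each state approaches the stationary distribution computed earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])          # illustrative ergodic chain

steps = 100_000
visits = np.zeros(len(P))
x = 0                               # arbitrary starting state
for _ in range(steps):
    x = rng.choice(len(P), p=P[x])
    visits[x] += 1

print(visits / steps)               # close to the stationary distribution [3/7, 4/7]
```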

Stationary Markov chain

If a Markov chain has a unique stationary distribution and its limiting distribution converges to that stationary distribution, then by definition this is equivalent to the Markov chain being a stationary Markov chain. A stationary Markov chain is a strictly stationary stochastic process, and its evolution does not depend on its position in the time sequence:

P(X_1 = s_1, ..., X_k = s_k) = P(X_{1+\tau} = s_1, ..., X_{k+\tau} = s_k)   for any shift \tau

By the limit theorem, an ergodic chain is a stationary Markov chain. In addition, from the above definition, the transition matrix of a stationary Markov chain is a constant matrix, and its n-step transition matrix is the n-th power of that constant matrix. A stationary Markov chain is also called a time-homogeneous Markov chain. Correspondingly, a Markov chain that does not satisfy the above condition is called a non-stationary Markov chain or a time-inhomogeneous Markov chain.

If a stationary Markov chain satisfies the detailed balance condition for any two states, it is reversible and is called a reversible Markov chain:

\pi_i p_{ij} = \pi_j p_{ji}

Reversibility of a Markov chain is a stricter requirement than irreducibility: not only can the chain move between any pair of states, but the probability flow between each pair of states is equal in the two directions. Thus a reversible Markov chain is a sufficient but not necessary condition for a stationary Markov chain. In Markov chain Monte Carlo (MCMC), constructing a reversible Markov chain that satisfies the detailed balance condition is one way to ensure that the sampling distribution is the stationary distribution.
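
A minimal numerical check, under the assumption of an illustrative reversible 3-state birth-death matrix, that detailed balance π_i p_{ij} = π_j p_{ji} implies the balance equations πP = π.

```python
import numpy as np

# Illustrative birth-death chain: such chains satisfy detailed balance.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
pi = np.array([3, 5, 3]) / 11.0   # its stationary distribution

# Detailed balance: pi_i * p_ij == pi_j * p_ji for every pair of states.
flow = pi[:, None] * P
assert np.allclose(flow, flow.T)

# Detailed balance implies the balance equations pi P = pi.
assert np.allclose(pi @ P, pi)
print("reversible chain: detailed balance and balance equations both hold")
```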

Special cases

Bernoulli process

Main article: Bernoulli process

The Bernoulli process is also called a binomial Markov chain. It is constructed as follows: given a series of independent "flags", each flag is binary and is positive with probability p and negative with probability 1-p. Let the random process X_n be the number of positive flags among the first n flags; then X_n is a Bernoulli process in which the random variables follow the binomial distribution:

P(X_n = k) = \binom{n}{k} p^k (1-p)^{n-k}

From this construction it can be seen that the probability of a positive sign among newly added signs has nothing to do with the number of previous positive signs; the process therefore has the Markov property, and the Bernoulli process is a Markov chain.
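
A tiny sketch of this counting process, assuming an illustrative success probability p = 0.5: X_n increases by 1 with probability p at each step, independent of how it reached its current value.

```python
import random

p = 0.5            # illustrative success probability
x, path = 0, [0]   # X_0 = 0: number of "positive" flags seen so far
for n in range(20):
    # The increment depends only on a fresh coin flip, not on the history,
    # so the counting process (X_n) is a Markov chain.
    x += 1 if random.random() < p else 0
    path.append(x)
print(path)
```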

The gambler's ruin problem

See also: Gambler's ruin

Suppose a gambler holds a finite number of chips and bets in a casino, winning or losing one chip on each bet with a fixed probability. If the gambler keeps betting, the total number of chips he holds is a Markov chain with the following transition probabilities (p denotes the probability of winning a bet):

p_{i, i+1} = p,   p_{i, i-1} = 1 - p   for i ≥ 1,   p_{0, 0} = 1

The gambler losing all of his chips (state 0) is the absorbing state. One-step analysis shows that when the probability of winning a bet is no greater than the probability of losing it, the Markov chain will inevitably enter the absorbing state, that is, no matter how many chips the gambler holds, he will eventually lose them all as the betting progresses.
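
A simulation sketch of the gambler's ruin chain, assuming an illustrative win probability of 0.45 and a starting stake of 10 chips; the step cap only keeps the example finite.

```python
import random

def ruin_probability(p=0.45, start=10, trials=5_000, max_steps=100_000):
    """Estimate the probability that the gambler eventually loses all chips."""
    ruined = 0
    for _ in range(trials):
        chips = start
        for _ in range(max_steps):
            chips += 1 if random.random() < p else -1
            if chips == 0:          # state 0 is absorbing: the gambler is ruined
                ruined += 1
                break
    return ruined / trials

print(ruin_probability())  # close to 1 when the win probability is at most 0.5
```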

Random walk

Main article: Random walk

Define a series of independent and identically distributed (iid) integer-valued random variables Z_1, Z_2, ..., and define the following random process:

X_t = X_0 + \sum_{k=1}^{t} Z_k

This random process is a random walk on the integers, and Z_k is the step length. Since the step lengths are iid, the current step is independent of the previous steps, and the random process is a Markov chain. Both the Bernoulli process and the gambler's ruin problem are special cases of the random walk.

The above example of the random walk shows that Markov chains have a general construction method. Specifically, if a random process X_t on the state space has the form:

X_{t+1} = f(X_t, U_{t+1})

where U_1, U_2, ... are iid random variables independent of X_0, then the random process is a Markov chain, and its one-step transition probability is p_{ij} = P(f(i, U) = j). This conclusion shows that a Markov chain can be numerically simulated with iid random variables (random numbers) uniformly distributed on the interval [0, 1].
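
A sketch of this general construction for the illustrative two-state matrix: each step draws a uniform random number U on [0, 1] and maps the pair (current state, U) to the next state, which reproduces the chain's transition probabilities.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])          # illustrative transition matrix

def f(x, u):
    """Next state as a deterministic function of the current state and a uniform draw."""
    return int(np.searchsorted(np.cumsum(P[x]), u))

x, counts = 0, np.zeros_like(P)
for _ in range(50_000):
    x_new = f(x, rng.uniform())
    counts[x, x_new] += 1
    x = x_new

print(counts / counts.sum(axis=1, keepdims=True))  # approximately reproduces P
```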

Generalizations

Markov process

Main article: Markov process

The Markov process is also called a continuous-time Markov chain. It is the generalization of the Markov chain, or discrete-time Markov chain: its state space is still a countable set, but its one-dimensional index set is no longer restricted to a countable set and can represent continuous time. The properties of a Markov process are analogous to those of a Markov chain, and its Markov property is usually expressed as follows:

P(X(t_{n+1}) = s_{n+1} | X(t_n) = s_n, ..., X(t_1) = s_1) = P(X(t_{n+1}) = s_{n+1} | X(t_n) = s_n),   t_1 < ... < t_{n+1}

Since the state space of a Markov process is a countable set, its sample path in continuous time is almost surely (a.s.) a right-continuous step function, so the Markov process can be expressed as a jump process and related to a Markov chain:

X(t) = Y_n   for S_n ≤ t < S_{n+1}

where the sojourn time \tau_n = S_{n+1} - S_n is the time spent in a state and S_n are the successive members of the index set (time segments). The Markov chain Y_n and the sojourn times \tau_n satisfying the above relation form the locally embedded process of the jump process on finite time segments.

Markov model

Main article: Markov model

The Markov chain or Markov process is not the only random process based on the Markov property. In fact, the hidden Markov model, the Markov decision process, the Markov random field and other stochastic processes/models also have Markov properties and are collectively referred to as Markov models. Here is a brief introduction to the other members of the Markov model family:

1. Hidden Markov model (HMM)

An HMM is a Markov chain whose state space is not fully visible, that is, a Markov chain containing hidden states. The visible part of an HMM is called the emission state, which is related to the hidden state but is not sufficient to determine it completely. Take speech recognition as an example: the sentence to be recognized is the invisible hidden state, and the received speech or audio is the emission state related to that sentence. A common application of the HMM is then to infer the hidden state from the emission state based on the Markov property, that is, to derive the corresponding sentence from the speech input.

2. Markov decision process (MDP)

An MDP is a Markov chain that introduces "actions" on top of the state space, that is, the transition probability of the Markov chain depends not only on the current state but also on the current action. An MDP comprises a pair of interacting objects, the agent and the environment, and defines five model elements: state, action, policy, reward and return. The policy is a mapping from states to actions, and the return is the discounted or accumulated reward over time. In the evolution of an MDP, the agent perceives the initial state of the environment and takes actions according to its policy; the environment enters a new state under the influence of the action and feeds a reward back to the agent; the agent receives the reward and adjusts its policy as it continues interacting with the environment. The MDP is one of the mathematical models of reinforcement learning and is used to model the random policies and rewards attainable by an agent. One generalization of the MDP is the partially observable Markov decision process (POMDP), which combines the hidden and emission states of the HMM with the MDP.

3. Markov random field (MRF)

An MRF is the generalization of the Markov chain from a one-dimensional index set to a higher-dimensional space. The Markov property of an MRF states that the state of any random variable is determined only by the states of all its adjacent random variables. Analogous to the finite-dimensional distribution of a Markov chain, the joint probability distribution of the random variables in an MRF is a product over all cliques containing those random variables. The most common example of an MRF is the Ising model.

Harris chain

The Harris chain is the generalization of the Markov chain from a countable state space to a continuous state space. Given a stationary Markov chain on a measurable space equipped with a σ-finite measure μ, consider any subset A of the measurable space with μ(A) > 0 and the return time of the subset,

\tau_A = \inf \{ t \ge 0 : X_t \in A \}

If the Markov chain satisfies P(\tau_A < \infty | X_0 = x) = 1 for every starting point x, then the Markov chain is a Harris chain.

Applications

MCMC

Building a Markov chain whose limiting distribution is the sampling distribution is the core of Markov chain Monte Carlo (Markov Chain Monte Carlo, MCMC). The central step of MCMC is to iterate the Markov chain over time steps to obtain random numbers that approximately follow the sampling distribution, and to use these random numbers to approximate the mathematical expectation of a target function under the sampling distribution:

E_{x \sim p}[f(x)] \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i),   where the x_i are drawn from the Markov chain

The limiting-distribution property of the Markov chain ensures that MCMC is an unbiased estimation method, that is, as the number of samples tends to infinity, the true value of the target expectation is obtained. This distinguishes MCMC from alternative methods such as variational Bayesian inference, which is usually computationally cheaper than MCMC but cannot guarantee an unbiased estimate.
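
A compact sketch of the Metropolis-Hastings algorithm mentioned in the history section, here with an assumed standard normal target density and a Gaussian random-walk proposal; the constructed chain is reversible with the target as its stationary distribution, so long-run sample averages approximate expectations under the target.

```python
import numpy as np

rng = np.random.default_rng(3)

def target(x):
    """Unnormalized target density (standard normal, chosen only for illustration)."""
    return np.exp(-0.5 * x * x)

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis-Hastings: the chain's stationary distribution is the target."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + step * rng.normal()
        # Symmetric proposal, so the acceptance ratio reduces to a density ratio.
        if rng.uniform() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

draws = metropolis_hastings(50_000)
print(draws.mean(), draws.var())   # approximately 0 and 1 for a standard normal target
```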

Other applications

In physics and chemistry, Markov chains and Markov processes are used to model dynamical systems, forming Markov dynamics. In queueing theory, the Markov chain is the basic model of the queueing process. In signal processing, Markov chains are the mathematical model behind some sequential data compression algorithms, such as Ziv-Lempel coding. In finance, Markov chain models are used to predict the market share of a company's products.
