School of Business Scholarly Works
https://hdl.handle.net/1808/132
Feed updated: 2024-05-25T17:05:08Z

Computing the decomposable entropy of belief-function graphical models
https://hdl.handle.net/1808/34711
Published: 2023-07-13
Jiroušek, Radim; Kratochvíl, Václav; Shenoy, Prakash P.
In 2018, Jiroušek and Shenoy proposed a definition of entropy for Dempster-Shafer (D-S) belief functions called decomposable entropy (d-entropy). This paper provides an algorithm for computing the d-entropy of directed graphical D-S belief-function models. We illustrate the algorithm using Almond's Captain's Problem example. For undirected graphical belief-function models, assuming that the set of belief functions in the model is non-informative, the belief functions are distinct. We illustrate this using Haenni-Lehmann's Communication Network problem. As the joint belief function for this model is quasi-consonant, it follows from a property of d-entropy that the d-entropy of this model is zero, and no algorithm is required. For a class of undirected graphical models, we provide an algorithm for computing their d-entropy. Finally, the d-entropy coincides with Shannon's entropy for the probability mass function of a single random variable, and for a large multi-dimensional probability distribution expressed as a directed acyclic graph model called a Bayesian network. We illustrate this using Lauritzen-Spiegelhalter's Chest Clinic example represented as a belief-function directed graphical model.
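The abstract's closing claim — that d-entropy coincides with Shannon's entropy for the probability mass function of a single variable — can be checked numerically. The sketch below assumes the commonality-based form of the d-entropy, H_d(m) = sum over non-empty a ⊆ Ω of (−1)^|a| · Q(a) · log₂ Q(a), with the convention 0·log 0 = 0; the frame and mass values are illustrative, and this is only the single-variable case, not the paper's algorithm for graphical models.

```python
from itertools import combinations
from math import log2

def powerset(frame):
    """All non-empty subsets of the frame, as frozensets."""
    return [frozenset(c) for r in range(1, len(frame) + 1)
            for c in combinations(frame, r)]

def commonality(m, frame):
    """Commonality function Q(a) = sum of m(b) over all focal sets b ⊇ a."""
    return {a: sum(v for b, v in m.items() if a <= b) for a in powerset(frame)}

def d_entropy(m, frame):
    """Assumed commonality-based d-entropy:
    H_d(m) = Σ_{∅≠a⊆Ω} (−1)^|a| Q(a) log2 Q(a), with 0·log 0 = 0."""
    q = commonality(m, frame)
    return sum(((-1) ** len(a)) * qa * log2(qa) for a, qa in q.items() if qa > 0)

# A Bayesian mass function (all focal sets are singletons) on a binary frame:
frame = {"x", "y"}
m = {frozenset({"x"}): 0.7, frozenset({"y"}): 0.3}
shannon = -(0.7 * log2(0.7) + 0.3 * log2(0.3))
print(round(d_entropy(m, frame), 6), round(shannon, 6))  # the two values agree
```

For a Bayesian mass function, Q is non-zero only on singletons, so the alternating sum collapses to −Σ p log₂ p, which is exactly the Shannon coincidence the abstract states.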
On conditional belief functions in directed graphical models in the Dempster-Shafer theory
https://hdl.handle.net/1808/34710
Published: 2023-07-04
Jiroušek, Radim; Kratochvíl, Václav; Shenoy, Prakash P.
Our primary goal is to define conditional belief functions in the Dempster-Shafer theory, by analogy with the notion of conditional probability tables in probability theory. Conditional belief functions are necessary for constructing directed graphical belief-function models in the same sense that conditional probability tables are necessary for constructing Bayesian networks. We provide examples of conditional belief functions, including those obtained by Smets' conditional embedding. Besides defining conditional belief functions, we state and prove a few basic properties of conditionals. In the belief-function literature, conditionals are instead defined starting from a joint belief function, using the removal operator, an inverse of Dempster's combination operator. When such conditionals are well-defined belief functions, we show that our definition is equivalent to these.
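Smets' conditional embedding mentioned above can be sketched in a few lines: a conditional m(Y | X = x) is "ballooned" onto the joint frame by attaching the complement (Ω_X ∖ {x}) × Ω_Y to each focal set, and the embedded conditionals are then combined with Dempster's rule. The two-variable model below is hypothetical and the code is a minimal illustration under these assumptions, not the authors' construction.

```python
from collections import defaultdict

def conditional_embedding(m_y_given_x, x, frame_x, frame_y):
    # Smets' conditional embedding (ballooning extension): a focal set b of
    # m(Y | X = x) maps to ({x} × b) ∪ ((Ω_X ∖ {x}) × Ω_Y) on the joint frame.
    rest = frozenset((xp, y) for xp in frame_x if xp != x for y in frame_y)
    emb = defaultdict(float)
    for b, v in m_y_given_x.items():
        emb[frozenset((x, y) for y in b) | rest] += v
    return dict(emb)

def dempster_combine(m1, m2):
    # Dempster's rule: intersect focal sets pairwise, then renormalize by
    # the mass not assigned to the empty set (the conflict).
    raw = defaultdict(float)
    for b, v1 in m1.items():
        for c, v2 in m2.items():
            raw[b & c] += v1 * v2
    conflict = raw.pop(frozenset(), 0.0)
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {a: v / (1.0 - conflict) for a, v in raw.items()}

# Hypothetical model: X with states x1/x2, Y with states y1/y2, and one
# conditional mass function for Y given each state of X.
frame_x, frame_y = {"x1", "x2"}, {"y1", "y2"}
m_y_x1 = {frozenset({"y1"}): 0.8, frozenset(frame_y): 0.2}  # m(Y | X = x1)
m_y_x2 = {frozenset({"y2"}): 0.6, frozenset(frame_y): 0.4}  # m(Y | X = x2)
joint = dempster_combine(
    conditional_embedding(m_y_x1, "x1", frame_x, frame_y),
    conditional_embedding(m_y_x2, "x2", frame_x, frame_y),
)
print(round(sum(joint.values()), 6))  # masses of the combined bpa sum to 1
```

Each embedded conditional is vacuous about the states of X it does not condition on, which is what lets the embeddings be combined without double-counting information about X.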
Making inferences in incomplete Bayesian networks: A Dempster-Shafer belief function approach
https://hdl.handle.net/1808/34701
Published: 2023-09-01
Shenoy, Prakash P.
How do you make inferences from a Bayesian network (BN) model with missing information? For example, we may not have priors for some variables, or may not have conditionals for some states of the parent variables. It is well known that Dempster-Shafer (D-S) belief-function theory is a generalization of probability theory. So one solution is to embed the incomplete BN model in a D-S belief-function model, omitting the missing data, and then make inferences from the belief-function model. We demonstrate this using an implementation of a local computation algorithm for D-S belief-function models called the “Belief function machine.” One advantage of this approach is that we obtain interval estimates of the probabilities of interest. Using Laplacian (equally likely) or maximum-entropy priors or conditionals for the missing data in a BN may lead to point estimates of the probabilities of interest, masking the uncertainty in these estimates. Bayesian inference cannot proceed from an incomplete model, and a Bayesian sensitivity analysis of the missing parameters is not a substitute for a belief-function analysis.
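The interval estimates mentioned above come from the belief/plausibility pair [Bel(a), Pl(a)]. A minimal sketch, assuming a hypothetical three-state variable whose prior is only partly specified — the unspecified mass is assigned to the whole frame (a vacuous component) rather than split by fiat; this is not the Belief function machine implementation:

```python
def belief(m, a):
    # Bel(a): total mass committed to non-empty subsets of a.
    return sum(v for b, v in m.items() if b and b <= a)

def plausibility(m, a):
    # Pl(a): total mass of focal sets that intersect a (do not exclude a).
    return sum(v for b, v in m.items() if b & a)

# Hypothetical incomplete prior: 0.5 is known to support state "a"; the
# remaining 0.5 is unspecified, so it goes to the whole frame.
frame = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.5, frame: 0.5}
for state in sorted(frame):
    s = frozenset({state})
    print(state, [belief(m, s), plausibility(m, s)])  # e.g. a [0.5, 1.0]
```

The width of each interval [Bel, Pl] reflects exactly the missing information; a Laplacian fill-in would instead collapse each interval to a single point estimate.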
Injury Care: A Missing Piece in the December 15th, 2022 US-Africa Leaders’ Summit
https://hdl.handle.net/1808/34149
Published: 2023-02-21
Epie, Terrence B.; Patel, Ojas; Ayres, Jack; Haley, Albers; Reddy, Ravali; Metumah, Wendy; Gartner, Austin; Dallman, Johnathan; Heddings, Archie