953 results for model complexity
Abstract:
I develop a model of endogenous bounded rationality due to search costs, arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred, via revealed preference, from her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler, or as the benefits of deeper search become larger relative to its costs, the choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model therefore explains why the disparity between observed choices and those prescribed under rationality varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.
Abstract:
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting with overfitting and, consequently, how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
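As a rough illustration of the underfit/overfit trade-off this abstract describes, here is a minimal sketch using synthetic data and polynomial fits of increasing flexibility. The data, the unimodal response curve, and the degrees compared are all hypothetical; real SDMs would use occurrence records and the methods named above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical occurrence-environment relationship: a smooth unimodal
# response to a single environmental gradient, observed with noise.
x = np.linspace(-2, 2, 60)
y_true = np.exp(-x**2)
y = y_true + rng.normal(0, 0.15, x.size)

def fit_rmse(degree):
    """Least-squares polynomial fit of the given degree; returns training RMSE."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(resid**2)))

# Training error never increases with added flexibility -- which is exactly
# why it cannot, by itself, justify choosing a more complex model.
for d in (1, 2, 4, 8):
    print(f"degree {d}: train RMSE = {fit_rmse(d):.3f}")
```

Since the degree-8 class nests the degree-1 class, the degree-8 training error can only be lower or equal; judging models on held-out data (or by a complexity penalty) is what separates informative flexibility from superfluous flexibility.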
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Due to population pressure in hazardous zones, the socio-economic impact is much higher than in the past. Therefore, the development of indicative susceptibility hazard maps is of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model has been developed for regional susceptibility assessments using a digital elevation model (DEM) with a GIS-based approach. The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its workability: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. It was shown that the quality of the DEM is the most important parameter for obtaining reliable propagation results, but also for identifying potential debris flow sources.
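To make the idea of DEM-based spreading concrete, here is a deliberately minimal sketch that follows the steepest downslope neighbour from a source cell on a toy elevation grid. This is a stand-in for illustration only: Flow-R itself uses more elaborate multiple-flow-direction and runout-distance algorithms, and the 3x3 DEM below is invented.

```python
import numpy as np

# Toy DEM (elevations in metres). The abstract notes that DEM quality is
# the dominant factor for reliable propagation, so real inputs matter far
# more than the routing rule sketched here.
dem = np.array([
    [9.0, 8.0, 7.0],
    [8.0, 6.0, 5.0],
    [7.0, 5.0, 3.0],
])

def steepest_descent_path(dem, start):
    """Follow the steepest downslope neighbour until a local minimum.

    A single-flow-direction simplification of debris flow spreading.
    """
    rows, cols = dem.shape
    path = [start]
    r, c = start
    while True:
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols:
                    dist = (dr * dr + dc * dc) ** 0.5   # diagonal = sqrt(2)
                    drop = (dem[r, c] - dem[nr, nc]) / dist  # slope gradient
                    if drop > best_drop:
                        best, best_drop = (nr, nc), drop
        if best is None:          # no lower neighbour: deposition zone
            return path
        r, c = best
        path.append(best)

print(steepest_descent_path(dem, (0, 0)))  # → [(0, 0), (1, 1), (2, 2)]
```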
Abstract:
General introduction
The Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) epidemic, despite recent encouraging announcements by the World Health Organization (WHO), is still today one of the world's major health care challenges. The present work lies in the field of health care management; in particular, we aim to evaluate the behavioural and non-behavioural interventions against HIV/AIDS in developing countries through a deterministic simulation model, in both human and economic terms. We will focus on assessing the effectiveness of antiretroviral therapies (ART) in heterosexual populations living in less developed countries where the epidemic has generalized (formerly defined by the WHO as type II countries). The model is calibrated using Botswana as a case study; however, our model can be adapted to other countries with similar transmission dynamics.
The first part of this thesis consists of reviewing the main mathematical concepts describing the transmission of infectious agents in general, with a focus on human immunodeficiency virus (HIV) transmission. We also review deterministic models assessing HIV interventions, with a focus on models aimed at African countries. This review helps us to recognize the need for a generic model and allows us to define a typical structure of such a generic deterministic model.
The second part describes the main feedback loops underlying the dynamics of HIV transmission. These loops represent the foundation of our model. This part also provides a detailed description of the model, including the various infected and non-infected population groups, the types of sexual relationships, the infection matrices, and important factors impacting HIV transmission such as condom use, other sexually transmitted diseases (STD) and male circumcision. We also included in the model a dynamic life expectancy calculator which, to our knowledge, is a unique feature allowing more realistic cost-efficiency calculations.
Various intervention scenarios are evaluated using the model, each of them including ART in combination with other interventions, namely: circumcision, campaigns aimed at behavioral change (Abstain, Be faithful or use Condoms, also named ABC campaigns), and treatment of other STD. A cost-efficiency analysis (CEA) is performed for each scenario. The CEA consists of measuring the cost per disability-adjusted life year (DALY) averted. This part also describes the model calibration and validation, including a sensitivity analysis.
The third part reports the results and discusses the model limitations. In particular, we argue that the combinations of ART with ABC campaigns and of ART with treatment of other STDs are the most cost-efficient interventions through 2020. The main model limitations include modeling the complexity of sexual relationships, omission of international migration, and ignoring variability in infectiousness according to the AIDS stage.
The fourth part reviews the major contributions of the thesis and discusses model generalizability and flexibility. Finally, we conclude that by selecting the adequate mix of interventions, policy makers can significantly reduce adult prevalence in Botswana in the coming twenty years, provided the country and its donors can bear the cost involved.
Part I: Context and literature review
In this section, after a brief introduction to the general literature, we focus in section two on the key mathematical concepts describing the transmission of infectious agents in general, with a focus on HIV transmission. Section three provides a description of HIV policy models, with a focus on deterministic models. This leads us in section four to envision the need for a generic deterministic HIV policy model and to briefly describe the structure of such a generic model applicable to countries with a generalized HIV/AIDS epidemic, also defined as pattern II countries by the WHO.
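The cost-per-DALY-averted metric used in the CEA reduces to a simple incremental ratio. The sketch below shows the arithmetic only; the function name and all figures are hypothetical, not taken from the thesis's Botswana calibration.

```python
def cost_per_daly_averted(cost_intervention, cost_baseline,
                          dalys_intervention, dalys_baseline):
    """Incremental cost-effectiveness ratio: extra cost per DALY averted.

    All inputs are hypothetical; the thesis calibrates such quantities
    per intervention scenario (ART + ABC, ART + STD treatment, etc.).
    """
    extra_cost = cost_intervention - cost_baseline
    dalys_averted = dalys_baseline - dalys_intervention
    if dalys_averted <= 0:
        raise ValueError("intervention averts no DALYs")
    return extra_cost / dalys_averted

# Invented example: an intervention costing 4M extra that averts 20,000 DALYs.
print(cost_per_daly_averted(5_000_000, 1_000_000, 40_000, 60_000))  # → 200.0
```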
Abstract:
Background Demand for home care services has increased considerably, along with the growing complexity of cases and variability among resources and providers. Designing services that guarantee co-ordination and integration for providers and levels of care is of paramount importance. The aim of this study is to determine the effectiveness of a new case-management-based home care delivery model which has been implemented in Andalusia (Spain). Methods Quasi-experimental, controlled, non-randomised, multi-centre study on the population receiving home care services, comparing the outcomes of the new model, which included nurse-led case management, versus the conventional one. Primary endpoints: functional status, satisfaction and use of healthcare resources. Secondary endpoints: recruitment and caregiver burden, mortality, institutionalisation, quality of life and family function. Analyses were performed at baseline, and at two, six and twelve months. A bivariate analysis was conducted with Student's t-test, the Mann-Whitney U test, and the chi-squared test. Kaplan-Meier and log-rank tests were performed to compare survival and institutionalisation. A multivariate analysis was performed to pinpoint factors that impact on improvement of functional ability. Results Baseline differences in functional capacity – significantly lower in the intervention group (RR: 1.52; 95%CI: 1.05–2.21; p = 0.0016) – disappeared at six months (RR: 1.31; 95%CI: 0.87–1.98; p = 0.178). At six months, caregiver burden showed a slight reduction in the intervention group, whereas it increased notably in the control group (baseline Zarit test: 57.06, 95%CI: 54.77–59.34 vs. 60.50, 95%CI: 53.63–67.37; p = 0.264; Zarit test at six months: 53.79, 95%CI: 49.67–57.92 vs. 66.26, 95%CI: 60.66–71.86; p = 0.002). Patients in the intervention group received more physiotherapy (7.92, 95%CI: 5.22–10.62 vs. 3.24, 95%CI: 1.37–5.310; p = 0.0001) and, on average, required fewer home care visits (9.40, 95%CI: 7.89–10.92 vs. 11.30, 95%CI: 9.10–14.54). No differences were found in terms of frequency of visits to A&E or hospital re-admissions. Furthermore, patients in the control group perceived higher levels of satisfaction (16.88, 95%CI: 16.32–17.43; range: 0–21, vs. 14.65, 95%CI: 13.61–15.68; p = 0.001). Conclusion A home care service model that includes nurse-led case management streamlines access to healthcare services and resources, while impacting positively on patients' functional ability and caregiver burden, with increased levels of satisfaction.
Abstract:
The effects of the nongray absorption (i.e., atmospheric opacity varying with wavelength) on the possible upper bound of the outgoing longwave radiation (OLR) emitted by a planetary atmosphere have been examined. This analysis is based on the semigray approach, which appears to be a reasonable compromise between the complexity of nongray models and the simplicity of the gray assumption (i.e., atmospheric absorption independent of wavelength). Atmospheric gases in semigray atmospheres make use of constant absorption coefficients in finite-width spectral bands. Here, such a semigray absorption is introduced in a one-dimensional (1D) radiative–convective model with a stratosphere in radiative equilibrium and a troposphere fully saturated with water vapor, which is the semigray gas. A single atmospheric window in the infrared spectrum has been assumed. In contrast to the single absolute limit of OLR found in gray atmospheres, semigray ones may also show a relative limit. This means that both finite and infinite runaway effects may arise in some semigray cases. Of particular importance is the finding of an entirely new branch of stable steady states that does not appear in gray atmospheres. This new multiple equilibrium is a consequence of the nongray absorption only. It is suspected that this new set of stable solutions has not been previously revealed in analyses of radiative–convective models since it does not appear for an atmosphere with nongray parameters similar to those for the earth's current state.
Abstract:
Business organisations are excellent representations of what in physics and mathematics are designated "chaotic" systems. Because a culture of innovation will be vital for organisational survival in the 21st century, the present paper proposes that viewing organisations in terms of "complexity theory" may assist leaders in fine-tuning managerial philosophies that provide orderly management emphasizing stability within a culture of organised chaos, for it is on the "boundary of chaos" that the greatest creativity occurs. It is argued that 21st century companies, as chaotic social systems, will no longer be effectively managed by rigid objectives (MBO) or by instructions (MBI). Their capacity for self-organisation will be derived essentially from how their members accept a shared set of values or principles for action (MBV). Complexity theory deals with systems that show complex structures in time or space, often hiding simple deterministic rules. This theory holds that once these rules are found, it is possible to make effective predictions and even to control the apparent complexity. The state of chaos that self-organises, thanks to the appearance of the "strange attractor", is the ideal basis for creativity and innovation in the company. In this self-organised state of chaos, members are not confined to narrow roles, and gradually develop their capacity for differentiation and relationships, growing continuously toward their maximum potential contribution to the efficiency of the organisation. In this way, values act as organisers or "attractors" of disorder, which in the theory of chaos are equations represented by unusually regular geometric configurations that predict the long-term behaviour of complex systems. In business organisations (as in all kinds of social systems) the starting principles end up as the final principles in the long term. An attractor is a model representation of the behavioral results of a system.
The attractor is not a force of attraction or a goal-oriented presence in the system; it simply depicts where the system is headed based on its rules of motion. Thus, in a culture that cultivates or shares values of autonomy, responsibility, independence, innovation, creativity, and proaction, the risk of short-term chaos is mitigated by an overall long-term sense of direction. A more suitable approach to manage the internal and external complexities that organisations are currently confronting is to alter their dominant culture under the principles of MBV.
Abstract:
The problems arising in commercial distribution are complex and involve several players and decision levels. One important decision is related with the design of the routes to distribute the products in an efficient and inexpensive way. This article deals with a complex vehicle routing problem that can be seen as a new extension of the basic vehicle routing problem. The proposed model is a multi-objective combinatorial optimization problem that considers three objectives and multiple periods, which models real distribution problems more closely. The first objective is cost minimization, the second is balancing work levels and the third is a marketing objective. An application of the model on a small example, with 5 clients and 3 days, is presented. The results of the model show the complexity of solving multi-objective combinatorial optimization problems and the contradiction between the several distribution management objectives.
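One common way to handle the three conflicting objectives named above is weighted-sum scalarisation. The sketch below evaluates two candidate routing plans this way; the plan values, weights, and names are invented for illustration and are not the article's formulation or data.

```python
# Hypothetical evaluation of two candidate routing plans on the three
# objectives from the abstract: cost, workload imbalance, marketing penalty.
# All values and weights are illustrative; all objectives are minimised.
plans = {
    "plan_A": {"cost": 120.0, "imbalance": 8.0, "marketing_penalty": 2.0},
    "plan_B": {"cost": 100.0, "imbalance": 15.0, "marketing_penalty": 6.0},
}
weights = {"cost": 1.0, "imbalance": 2.0, "marketing_penalty": 3.0}

def weighted_sum(plan):
    """Collapse the three objectives into one scalar score."""
    return sum(weights[k] * v for k, v in plan.items())

best = min(plans, key=lambda name: weighted_sum(plans[name]))
print(best, weighted_sum(plans[best]))  # → plan_A 142.0
```

Note how the ranking depends entirely on the weights: plan_B wins on cost alone, but loses once workload balance and the marketing objective are weighted in, which is exactly the contradiction between objectives the abstract points to.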
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
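The overall recipe (split the data, draw one candidate per class, pick the candidate minimising empirical risk plus a class-complexity term) can be sketched very crudely. Here the classes are polynomial degrees and the penalty is a simple parameter-count term; this is a stand-in for the paper's cover-based complexity estimates, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data, split in two halves as in the abstract.
x = rng.uniform(-1, 1, 100)
y = np.sin(3 * x) + rng.normal(0, 0.2, x.size)
x1, y1 = x[:50], y[:50]   # used to produce one candidate per class
x2, y2 = x[50:], y[50:]   # used to compare penalized empirical risks

def penalized_selection(max_degree, c=0.05):
    """Pick a polynomial degree minimising empirical risk + complexity penalty.

    The penalty c * n_params / n is a crude placeholder for the empirical
    cover-size complexities developed in the paper.
    """
    best_d, best_score = None, np.inf
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x1, y1, d)                 # candidate from class d
        risk = np.mean((y2 - np.polyval(coeffs, x2)) ** 2)
        score = risk + c * (d + 1) / x2.size
        if score < best_score:
            best_d, best_score = d, score
    return best_d

print(penalized_selection(10))
```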
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
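For a finite hypothesis class, the maximal discrepancy can be computed directly by brute force, which makes the quantity concrete. The sketch below uses a small invented class of 1-D threshold classifiers; for large classes one would instead exploit the label-flipping equivalence mentioned above and run an ERM solver.

```python
import numpy as np

rng = np.random.default_rng(2)

def predictions(x, t, sign):
    """Threshold classifier: sign * (+1 if x > t else -1); both orientations."""
    return sign * np.where(x > t, 1, -1)

x = rng.uniform(0, 1, 40)
y = np.where(x > 0.5, 1, -1)      # noise-free labels, for simplicity
half = x.size // 2

# Maximal discrepancy: the largest gap, over classifiers in the class,
# between the error rates attained on the two halves of the training data.
thresholds = np.linspace(0, 1, 21)
disc = max(
    np.mean(predictions(x[:half], t, s) != y[:half])
    - np.mean(predictions(x[half:], t, s) != y[half:])
    for t in thresholds for s in (1, -1)
)
print(disc)
```

Because the class here contains each classifier together with its sign-flipped counterpart, the maximum is always non-negative, so the quantity behaves like a capacity penalty.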
Abstract:
Abstract The complexity of the current business world is making corporate disclosure more and more important for information users. These users, including investors, financial analysts, and government authorities, rely on the disclosed information to make their investment decisions, analyze and recommend shares, and draft regulation policies. Moreover, the globalization of capital markets has raised difficulties for information users in understanding the differences in corporate disclosure across countries and across firms. Using a sample of 797 firms from 34 countries, this thesis advances the literature on disclosure by illustrating comprehensively the disclosure determinants originating in firm systems and national systems based on the multilevel latent variable approach. Under this approach, the overall variation associated with the firm-specific variables is decomposed into two parts, the within-country and the between-country part. Accordingly, the model estimates the latent association between corporate disclosure and information demand at two levels, the within-country and the between-country level. The results indicate that the variables originating from corporate systems are hierarchically correlated with those from the country environment. The information demand factor, indicated by the number of exchanges listed and the number of analyst recommendations, can significantly explain the variation of corporate disclosure both within and between countries. The exogenous influences of firm fundamentals (firm size and performance) are exerted indirectly through the information demand factor. Specifically, if the between-country variation in firm variables is taken into account, only the variables of legal systems and economic growth remain significant in explaining the disclosure differences across countries.
These findings strongly support the hypothesis that disclosure is a response to both corporate systems and national systems, but the influence of the latter on disclosure is reflected significantly through that of the former. In addition, the results based on ADR (American Depositary Receipt) firms suggest that the globalization of capital markets is harmonizing the disclosure behavior of cross-boundary listed firms, but it cannot entirely eliminate the national features in disclosure and other firm-specific characteristics.
Abstract:
The predictive potential of six selected factors was assessed in 72 patients with primary myelodysplastic syndrome using univariate and multivariate logistic regression analysis of survival at 18 months. Factors were age (above the median of 69 years), dysplastic features in the three myeloid bone marrow cell lineages, presence of chromosome defects, all metaphases abnormal, double or complex chromosome defects (C23), and a Bournemouth score of 2, 3, or 4 (B234). In the multivariate approach, B234 and C23 proved to be significantly associated with a reduction in the survival probability. The similarity of the regression coefficients associated with these two factors means that they have about the same weight. Consequently, the model was simplified by counting the number of factors (0, 1, or 2) present in each patient, thus generating a scoring system called the Lausanne-Bournemouth score (LB score). The LB score combines the well-recognized and easy-to-use Bournemouth score (B score) with the chromosome defect complexity, C23 constituting an additional indicator of patient outcome. The predicted risk of death within 18 months calculated from the model is as follows: 7.1% (confidence interval: 1.7-24.8) for patients with an LB score of 0, 60.1% (44.7-73.8) for an LB score of 1, and 96.8% (84.5-99.4) for an LB score of 2. The scoring system presented here has several interesting features. The LB score may improve the predictive value of the B score, as it is able to recognize two prognostic groups in the intermediate risk category of patients with B scores of 2 or 3. It also has the ability to identify two distinct prognostic subclasses among RAEB and possibly CMML patients. In addition to its above-described usefulness in the prognostic evaluation, the LB score may bring new insights into the understanding of evolution patterns in MDS.
We used the combination of the B score and chromosome complexity to define four classes which may be considered four possible states of myelodysplasia and which describe two distinct evolutionary pathways.
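The LB score itself is simple enough to sketch directly: count how many of the two adverse factors (B234 and C23) a patient presents, then read off the model's predicted 18-month death risk. The point estimates below are the ones reported in the abstract; the function names and boolean interface are illustrative.

```python
# Predicted risk of death within 18 months (%) by LB score, as reported
# in the abstract (point estimates; confidence intervals omitted here).
PREDICTED_RISK = {0: 7.1, 1: 60.1, 2: 96.8}

def lb_score(b234, c23):
    """Lausanne-Bournemouth score: number of adverse factors present.

    b234: Bournemouth score of 2, 3 or 4; c23: double/complex chromosome defects.
    """
    return int(bool(b234)) + int(bool(c23))

def predicted_risk(b234, c23):
    """Model-predicted 18-month death risk (%) for a patient."""
    return PREDICTED_RISK[lb_score(b234, c23)]

print(predicted_risk(True, False))  # → 60.1
```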
Abstract:
Soil organic matter (SOM) plays an important role in the carbon (C) cycle and in soil quality. Considering the complexity of factors that control SOM cycling and the long time it usually takes to observe changes in SOM stocks, modeling constitutes a very important tool to understand SOM cycling in forest soils. The following hypotheses were tested: (i) soil organic carbon (SOC) stocks would be higher after several rotations of eucalyptus than in low-productivity pastures; (ii) SOC values simulated by the Century model would describe the data better than the mean of observations. So, the aims of the current study were: (i) to evaluate the SOM dynamics using the Century model to simulate the changes of C stocks for two eucalyptus chronosequences in the Rio Doce Valley, Minas Gerais State, Brazil; and (ii) to compare the C stocks simulated by Century with the C stocks measured in soils of different Orders and regions of the Rio Doce Valley growing eucalyptus. In Belo Oriente (BO), short-rotation eucalyptus plantations had been cultivated for 4.0, 13.0, 22.0, 32.0 and 34.0 years, at a lower elevation and in a warmer climate, while in Virginópolis (VG), these time periods were 8.0, 19.0 and 33.0 years, at a higher elevation and in a milder climate. Soil samples were collected from the 0-20 cm layer to estimate C stocks. Results indicate that the C stocks simulated by the Century model decreased after 37 years of poorly managed pastures in areas previously covered by native forest in the regions of BO and VG. The substitution of poorly managed pastures by eucalyptus in the early 1970s led to an average increase of C of 0.28 and 0.42 t ha-1 year-1 in BO and VG, respectively.
The measured C stocks under eucalyptus in distinct soil Orders and independent regions with variable edapho-climatic conditions were not far from the values estimated by the Century model (root mean square error - RMSE = 20.9; model efficiency - EF = 0.29), despite the opposite result obtained with the statistical procedure to test the identity of analytical methods. Only for lower soil C stocks did the model over-estimate the C stock in the 0-20 cm layer. Thus, the Century model is highly promising for detecting changes in C stocks in distinct soil Orders under eucalyptus, as well as for indicating the impact of harvest residue management on SOM in future rotations.
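The two evaluation statistics quoted above (RMSE and model efficiency) are easy to compute from paired observed/simulated stocks. A minimal sketch, assuming EF is the commonly used Nash-Sutcliffe formulation and using invented C-stock values, not the study's data:

```python
import numpy as np

def rmse(observed, simulated):
    """Root mean square error between measured and simulated C stocks."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

def model_efficiency(observed, simulated):
    """Nash-Sutcliffe efficiency: EF = 1 - SS_res / SS_tot.

    EF = 1 is a perfect match; EF <= 0 means the model does no better than
    the mean of the observations -- the benchmark in the second hypothesis.
    """
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    ss_res = np.sum((observed - simulated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Hypothetical C stocks (t/ha), for illustration only.
obs = [40.0, 55.0, 60.0, 72.0]
sim = [45.0, 50.0, 63.0, 70.0]
print(rmse(obs, sim), model_efficiency(obs, sim))
```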