898 results for Discrete Regression and Qualitative Choice Models
Abstract:
STRIPPING is a software application developed for the automatic design of a randomly packed column in which volatile organic compounds (VOCs) are transferred from water to air, and for the simulation of its steady-state behaviour. The software eliminates the need for experimental work in selecting the column diameter and allows an a priori choice of the most convenient hydraulic regime for this type of operation. It also lets the operator choose the models used to calculate certain parameters, namely between the Eckert/Robbins and Billet models for estimating the pressure drop of the gaseous phase, and between the Billet and Onda/Djebbar models for mass transfer. Illustrations of the graphical interface are presented.
Abstract:
In Portugal, the activity rate of working mothers is high, but structural asymmetries of responsibilities between women and men persist in the family sphere. Based on quantitative and qualitative data, results are presented showing that, in spite of a global feminization rate of 58.6%, women workers in the State Administration still bear greater responsibilities in their family/private lives than men. Women in technical and leadership functions show the same patterns of family and domestic responsibilities but different patterns of working time. Women in technical functions tend to adopt a strategy of work-family time balance, despite fewer career opportunities, whereas women in leadership functions give primacy to work time, just as men do. Nevertheless, women in both technical and leadership functions feel a permanent conflict between career and family responsibilities, which is not felt by men. Gender roles define dominant models of work and family organisation, which lead to different professional strategies and career opportunities.
Abstract:
Dissertation presented to obtain the degree of Doctor in Electrical and Computer Engineering – Digital and Perceptional Systems at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto to obtain the degree of Master in Accounting and Finance, under the supervision of Mestre Adalmiro Álvaro Malheiro de Castro Andrade Pereira.
Abstract:
Cooperatives, as entities belonging to the social economy sector, are organisations with characteristics of their own, distinct from commercial companies; their mutualistic purpose and the variable nature of their share capital stand out, in contrast with the profit-seeking purpose and the capital-maintenance principle that characterise companies. These specificities of cooperatives constrain their access to financing. Because of its variable nature, the share capital does not represent a guarantee for creditors, so the reserves, namely the mandatory reserves, turn out to be the highest-quality financial resource of a cooperative. Other financial instruments can be identified in cooperatives, such as investment securities and bonds, the results from operations with third parties, which are compulsorily allocated to indivisible reserves, surpluses, investor members, subsidies and tax benefits. Beyond identifying the main sources of financing, the study also set out to rethink the existing instruments and, possibly, the creation of new financing instruments for cooperatives. In methodological terms, the option fell on the combination of two methods: quantitative and qualitative. The quantitative research technique selected for data collection was the database, and for the qualitative research the chosen techniques were content analysis, interviews and audio recording. The research results confirm that the creation of new financing instruments for cooperatives is indispensable. A need was found for financing models adapted to the mutualistic purpose of the cooperative. It was shown that the main source of financing is internal resources, in the form of reserves.
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices, in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller one may also win, provided its relative size is not too small; a larger share of self-delusional voters in the minority party decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006), and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
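The costly-thinking Cournot setting described in Chapter 2 lends itself to a small numerical illustration. The sketch below is not taken from the thesis; the demand intercept, slope, marginal cost, default quantity and thinking cost are hypothetical values. It builds the 2x2 "think vs. default" meta-game on top of a linear symmetric Cournot duopoly and enumerates its pure-strategy equilibria, showing how an intermediate thinking cost can sustain an asymmetric profile in which one firm thinks and the other keeps the default quantity.

```python
# Minimal sketch (not the thesis code): the 2x2 "think vs. default" meta-game
# built on a linear symmetric Cournot duopoly. All parameter values
# (demand intercept a, slope b, marginal cost c, default quantity d and
# thinking cost k) are hypothetical.

def profit(q_own, q_rival, a=10.0, b=1.0, c=1.0):
    """Cournot profit with inverse demand P = a - b*(q_own + q_rival)."""
    return (a - b * (q_own + q_rival) - c) * q_own

def br(q_rival, a=10.0, b=1.0, c=1.0):
    """Best response to a rival producing q_rival."""
    return max((a - c - b * q_rival) / (2 * b), 0.0)

def meta_payoffs(d, k, a=10.0, b=1.0, c=1.0):
    """Payoffs of the game where each firm either pays k to think
    (best respond) or produces the default quantity d at no cost."""
    q_nash = (a - c) / (3 * b)  # both firms thinking: standard Cournot output
    return {
        ("think", "think"): (profit(q_nash, q_nash, a, b, c) - k,) * 2,
        ("think", "default"): (profit(br(d, a, b, c), d, a, b, c) - k,
                               profit(d, br(d, a, b, c), a, b, c)),
        ("default", "think"): (profit(d, br(d, a, b, c), a, b, c),
                               profit(br(d, a, b, c), d, a, b, c) - k),
        ("default", "default"): (profit(d, d, a, b, c),) * 2,
    }

def pure_equilibria(d, k):
    """Pure-strategy Nash equilibria of the 2x2 meta-game."""
    pay = meta_payoffs(d, k)
    acts = ("think", "default")
    eqs = []
    for a1 in acts:
        for a2 in acts:
            u1, u2 = pay[(a1, a2)]
            if all(pay[(x, a2)][0] <= u1 for x in acts) and \
               all(pay[(a1, y)][1] <= u2 for y in acts):
                eqs.append((a1, a2))
    return eqs

# With these (hypothetical) numbers, an intermediate thinking cost yields only
# the asymmetric equilibria: one firm thinks, the other keeps the default.
print(pure_equilibria(d=2.0, k=2.1))   # [('think', 'default'), ('default', 'think')]
```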
Abstract:
PhD thesis in Educational Sciences (specialization in Politics of Education).
Abstract:
OBJECTIVE: To investigate preoperative predictive factors of severe perioperative intercurrent events and in-hospital mortality in coronary artery bypass graft (CABG) surgery and to develop specific risk prediction models for these events, mainly those that can be modified in the preoperative period. METHODS: We prospectively studied 453 patients who had undergone CABG. Factors independently associated with the events of interest were determined with multiple logistic regression and the Cox proportional hazards regression model. RESULTS: The mortality rate was 11.3% (51/453), and 21.2% of the patients had 1 or more perioperative intercurrent events. In the final model, the following variables remained associated with the risk of intercurrent events: age ≥ 70 years, female sex, hospitalization via SUS (Sistema Único de Saúde - the Brazilian public health system), cardiogenic shock, ischemia, and dependence on dialysis. Using multiple logistic regression for in-hospital mortality, the following variables entered the risk prediction model: age ≥ 70 years, female sex, hospitalization via SUS, diabetes, renal dysfunction, and cardiogenic shock. According to the Cox regression model for death within the 7 days following surgery, the following variables remained associated with mortality: age ≥ 70 years, female sex, cardiogenic shock, and hospitalization via SUS. CONCLUSION: Aspects linked to the structure of the Brazilian health system emerged as factors of great impact on the results obtained, indicating that the events investigated also depend on factors unrelated to the patient's intrinsic condition.
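As a rough illustration of the modelling strategy (not the authors' analysis), the sketch below fits a multiple logistic regression for in-hospital mortality using the preoperative predictors named in the abstract; the data file and column names are hypothetical stand-ins.

```python
# Minimal sketch, not the study's code: multiple logistic regression for
# in-hospital mortality after CABG. The file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cabg_preop.csv")  # hypothetical file: one row per patient, 0/1 indicators
predictors = ["age_ge_70", "female", "sus_admission",
              "diabetes", "renal_dysfunction", "cardiogenic_shock"]

X = sm.add_constant(df[predictors])   # add intercept term
y = df["in_hospital_death"]           # 0/1 outcome

model = sm.Logit(y, X).fit()
print(model.summary())
print(np.exp(model.params))           # coefficients expressed as odds ratios
```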
Validation of the Killip-Kimball Classification and Late Mortality after Acute Myocardial Infarction
Abstract:
Background: The classification or index of heart failure severity in patients with acute myocardial infarction (AMI) was proposed by Killip and Kimball during the 1960s, aiming at assessing the risk of in-hospital death and the potential benefit of the specific care provided in Coronary Care Units (CCU). Objective: To validate the risk stratification of the Killip classification for long-term mortality and to compare its prognostic value in patients with non-ST-segment elevation MI (NSTEMI) relative to patients with ST-segment elevation MI (STEMI), in the era of reperfusion and modern antithrombotic therapies. Methods: We evaluated 1906 patients with documented AMI admitted to the CCU from 1995 to 2011, with a mean follow-up of 5 years, to assess total mortality. Kaplan-Meier (KM) curves were developed to compare survival distributions according to Killip class and NSTEMI versus STEMI. Cox proportional regression models were developed to determine the independent association between Killip class and mortality, with sensitivity analyses based on the type of AMI. Results: The proportions of deaths and the KM survival distributions differed significantly across Killip classes >1 (p < 0.001), with a similar pattern between patients with NSTEMI and STEMI. Cox models identified the Killip classification as a significant, sustained and consistent predictor, independent of relevant covariables (Wald χ2 16.5 [p = 0.001] for NSTEMI; Wald χ2 11.9 [p = 0.008] for STEMI). Conclusion: The Killip and Kimball classification plays a relevant prognostic role in mortality at a mean follow-up of 5 years post-AMI, with a similar pattern between NSTEMI and STEMI patients.
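For readers unfamiliar with the survival tools mentioned, the following sketch (not the study's code) reproduces the general workflow with the `lifelines` package: Kaplan-Meier curves stratified by Killip class and a Cox proportional hazards model adjusted for covariables. The data file and column names are hypothetical.

```python
# Minimal sketch: Kaplan-Meier curves by Killip class and a Cox model for
# long-term mortality. `ami_registry.csv` and its columns are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("ami_registry.csv")   # expected columns: followup_years, death (0/1),
                                       # killip_class (1-4), stemi (0/1), age

kmf = KaplanMeierFitter()
for killip, grp in df.groupby("killip_class"):
    kmf.fit(grp["followup_years"], grp["death"], label=f"Killip {killip}")
    kmf.plot_survival_function()       # one KM curve per Killip class

cph = CoxPHFitter()
cph.fit(df[["followup_years", "death", "killip_class", "stemi", "age"]],
        duration_col="followup_years", event_col="death")
cph.print_summary()                    # hazard ratios with confidence intervals
```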
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive ability instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide an efficient means to model local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity from measurements taken in the Briansk region following the Chernobyl accident.
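A minimal sketch of the underlying idea, under the assumption that two spatial scales suffice: the kernel of a support vector regression is written as a weighted sum of a short-range and a long-range RBF kernel. This is illustrative only; in the paper the optimal mixture of scales is learned from the data, whereas the weights and length scales below are arbitrary, and the toy data are made up.

```python
# Minimal sketch, not the paper's algorithm: a two-scale kernel for SVR,
# built as a weighted sum of a short-range and a long-range RBF kernel.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def two_scale_kernel(X, Y, w=0.5, len_short=0.1, len_long=2.0):
    """Short length scale captures local anomalies; long length scale
    captures the regional trend. w is the (hypothetical) mixture weight."""
    gamma_s = 1.0 / (2.0 * len_short ** 2)
    gamma_l = 1.0 / (2.0 * len_long ** 2)
    return w * rbf_kernel(X, Y, gamma=gamma_s) + (1 - w) * rbf_kernel(X, Y, gamma=gamma_l)

# Toy spatial data: planar coordinates and a measured activity value.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
activity = np.sin(coords[:, 0]) + 0.3 * rng.normal(size=200)   # hypothetical field

svr = SVR(kernel=two_scale_kernel, C=10.0, epsilon=0.05)
svr.fit(coords, activity)
pred = svr.predict(coords)   # fitted surface combining both scales
```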
Abstract:
This article sets out a theoretical framework for the study of organisational change within political alliances. To achieve this objective it takes as a starting point a series of premises, the most notable of which is the definition of organisational change as a discrete, complex and focused phenomenon of changes in power within the party. In accordance with these premises, it analyses the synthetic model of organisational change proposed by Panebianco (1988). After examining its limitations, a number of amendments are proposed to adapt it to the way political alliances operate. This results in the design of four new models. In order to test their validity and explanatory power in a preliminary manner, the second part looks at the organisational change of the UDC within the CiU alliance between 1978 and 2001. The discussion and conclusions demonstrate the determinism problems of the Panebianco model and suggest, tentatively, the importance of the power balance within the alliance as a key factor.
Abstract:
We present a stylized intertemporal forward-looking model that accommodates key regional economic features, an area where the literature is not well developed. The main difference from standard applications is the role of saving and its implication for the balance of payments. While maintaining dynamic forward-looking behaviour for agents, the rate of private saving is exogenously determined, so no neoclassical financial adjustment is needed. We also focus on the similarities and differences between myopic and forward-looking models, highlighting the divergences among the main adjustment equations and the resulting simulation outcomes.
Abstract:
This paper surveys the literature on strategy-proofness from a historical perspective. While I discuss the connections with other works on incentives in mechanism design, the main emphasis is on social choice models. This article has been prepared for the Handbook of Social Choice and Welfare, Volume 2, edited by K. Arrow, A. Sen and K. Suzumura.
Abstract:
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; and (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented in the software GINsim, which enables the definition, analysis, and simulation of logical regulatory graphs.
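A toy sketch of the logical formalism (not GINsim itself, whose models are multilevel and far larger): two Boolean components with hypothetical regulatory rules, the asynchronous state transition graph, and attractors extracted as terminal strongly connected components.

```python
# Minimal sketch: asynchronous state transition graph of a toy two-component
# Boolean model, with attractors found as terminal strongly connected
# components. The regulatory rules are hypothetical.
from itertools import product
import networkx as nx

# Toy logical rules: each component's target value as a function of the state.
rules = {
    "A": lambda s: int(not s["B"]),   # A is repressed by B
    "B": lambda s: int(s["A"]),       # B is activated by A
}
components = list(rules)

def async_stg(rules):
    """Asynchronous updating: one component changes per transition."""
    g = nx.DiGraph()
    for values in product([0, 1], repeat=len(components)):
        state = dict(zip(components, values))
        g.add_node(values)
        for i, comp in enumerate(components):
            target = rules[comp](state)
            if target != values[i]:
                nxt = list(values)
                nxt[i] = target
                g.add_edge(values, tuple(nxt))
    return g

g = async_stg(rules)
# Attractors = terminal SCCs, i.e. SCCs with no outgoing edge in the condensation.
cond = nx.condensation(g)
attractors = [cond.nodes[n]["members"] for n in cond.nodes if cond.out_degree(n) == 0]
print(attractors)   # here: a single cyclic attractor covering all four states
```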
Abstract:
Network airlines have been increasingly focusing their operations on hub airports through the exploitation of connecting traffic, allowing them to take advantage of economies of traffic density, which are unequivocal in the airline industry. Less attention has been devoted to airlines' decisions on point-to-point thin routes, which could be served using different aircraft technologies and different business models. This paper examines, both theoretically and empirically, the impact on airlines' networks of the two major innovations in the airline industry in the last two decades: the regional jet technology and the low-cost business model. We show that, under certain circumstances, direct services on point-to-point thin routes can be viable and thus airlines may be interested in diverting passengers away from the hub.