988 results for Gaussian assumption
Abstract:
With ever-tightening budgets and the limitations of demolition equipment, states are looking for cost-effective, reliable, and sustainable methods for removing concrete decks from bridges. The goal of this research was to explore such methods. The research team conducted qualitative studies through a literature review, interviews, surveys, and workshops and performed small-scale trials and push-out tests (shear strength evaluations). Interviews with bridge owners and contractors indicated that concrete deck replacement was more economical than replacing an entire superstructure, under the assumption that the salvaged superstructure has adequate remaining service life and capacity. Surveys and workshops provided insight into the advantages and disadvantages of deck removal methods, information that was used to guide testing. Small-scale trials explored three promising deck removal methods: hydrodemolition, chemical splitting, and peeling.
Abstract:
Computational anatomy with magnetic resonance imaging (MRI) is well established as a noninvasive biomarker of Alzheimer's disease (AD); however, there is less certainty about its dependency on the staging of AD. We use classical group analyses and automated machine learning classification of standard structural MRI scans to investigate AD diagnostic accuracy from the preclinical phase to clinical dementia. Longitudinal data from the Alzheimer's Disease Neuroimaging Initiative were stratified into 4 groups according to clinical status: (1) AD patients; (2) mild cognitive impairment (MCI) converters; (3) MCI nonconverters; and (4) healthy controls. These data were submitted to a support vector machine. The resulting classifier performed significantly above chance (62%) in detecting AD as early as 4 years before conversion from MCI. Voxel-based univariate tests confirmed the plausibility of our findings, detecting a distributed network of hippocampal-temporoparietal atrophy in AD patients. We also identified a subgroup of control subjects with brain structure and cognitive changes highly similar to those observed in AD. Our results indicate that computational anatomy can detect AD substantially earlier than suggested by current models. The demonstrated differential spatial pattern of atrophy between correctly and incorrectly classified AD patients challenges the assumption of a uniform pathophysiological process underlying clinically identified AD.
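A minimal sketch of the kind of classification pipeline described above, assuming scikit-learn; the feature matrix X and labels y are hypothetical stand-ins for the ADNI-derived grey-matter data, not the study's actual inputs or settings.

```python
# Sketch of an SVM classification of structural-MRI features, under stated
# assumptions: X holds per-subject MRI-derived features, y the diagnostic
# labels. Both arrays are placeholders, not the ADNI data used in the study.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 120, 500
X = rng.normal(size=(n_subjects, n_features))   # placeholder grey-matter features
y = rng.integers(0, 2, size=n_subjects)         # placeholder labels (e.g. AD vs. control)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)       # cross-validated accuracy
print("mean accuracy: %.2f" % scores.mean())    # chance level here is ~0.5
```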
Abstract:
The objective of this paper is to distinguish between different types of working poverty on the basis of the mechanisms that produce it. Whereas the poverty literature identifies a myriad of risk factors and categories of disadvantaged workers, we focus on three immediate causes of working poverty, namely a low wage rate, weak labour-force attachment, and high needs, the latter mainly due to the presence of children (and sometimes to the increase in needs caused by a divorce). These three mechanisms are the channels through which macroeconomic, demographic and policy factors have a direct bearing on working households. The main assumption tested here is that welfare regimes strongly influence the relative weight of these three mechanisms in producing working poverty and, hence, the composition of the working-poor population. Our figures confirm this hypothesis and show that low-wage employment is a key factor but by far not the only one, and that family policies, broadly understood, play a decisive role, as do patterns of labour market participation and integration.
Abstract:
Background: Conventional magnetic resonance imaging (MRI) techniques are highly sensitive in detecting multiple sclerosis (MS) plaques, enabling a quantitative assessment of inflammatory activity and lesion load. In quantitative analyses of focal lesions, manual or semi-automated segmentations have been widely used to compute the total number of lesions and the total lesion volume. These techniques, however, are both challenging and time-consuming, and are also prone to intra-observer and inter-observer variability. Aim: To develop an automated approach to segment brain tissues and MS lesions from brain MRI images. The goal is to reduce user interaction and to provide an objective tool that eliminates the inter- and intra-observer variability. Methods: Based on the recent methods developed by Souplet et al. and de Boer et al., we propose a novel pipeline which includes the following steps: bias correction, skull stripping, atlas registration, tissue classification, and lesion segmentation. After the initial pre-processing steps, an MRI scan is automatically segmented into 4 classes: white matter (WM), grey matter (GM), cerebrospinal fluid (CSF) and partial volume. An expectation-maximisation method which fits a multivariate Gaussian mixture model to T1-w, T2-w and PD-w images is used for this purpose. Based on the obtained tissue masks and using the estimated GM mean and variance, we apply an intensity threshold to the FLAIR image, which provides the lesion segmentation. With the aim of improving this initial result, spatial information coming from the neighbouring tissue labels is used to refine the final lesion segmentation. Results: The experimental evaluation was performed using real 1.5T data sets and the corresponding ground truth annotations provided by expert radiologists. The following values were obtained: a true positive (TP) fraction of 64%, a false positive (FP) fraction of 80%, and an average surface distance of 7.89 mm. The results of our approach were quantitatively compared to our implementations of the works of Souplet et al. and de Boer et al., obtaining higher TP and lower FP values. Conclusion: Promising MS lesion segmentation results have been obtained in terms of TP. However, the high number of FP, which is still a well-known problem of all automated MS lesion segmentation approaches, has to be reduced before these methods can be used in standard clinical practice. Our future work will focus on tackling this issue.
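A minimal sketch of the two central steps named in the Methods (an EM-fitted multivariate Gaussian mixture over co-registered intensities, then a FLAIR threshold derived from the grey-matter statistics), assuming scikit-learn and NumPy; the arrays, the class index taken to be GM, and the factor kappa are hypothetical, and the pre-processing and spatial refinement steps are not reproduced.

```python
# Sketch under stated assumptions: fit a multivariate Gaussian mixture by EM
# to placeholder T1/T2/PD voxel intensities, then threshold the FLAIR image
# relative to the estimated grey-matter statistics to obtain candidate lesions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_voxels = 10000
features = rng.normal(size=(n_voxels, 3))    # placeholder T1/T2/PD intensities per voxel
flair = rng.normal(loc=1.0, size=n_voxels)   # placeholder FLAIR intensities

# 4 classes: WM, GM, CSF and partial volume (component labels are arbitrary
# until inspected against the data)
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features)

gm_class = 1                                 # assumed: component 1 identified as GM
gm_mean = flair[labels == gm_class].mean()
gm_std = flair[labels == gm_class].std()

# Hyperintense FLAIR voxels relative to GM are candidate lesions; kappa is a
# tunable factor chosen for illustration, not a value reported in the paper.
kappa = 2.5
lesion_mask = flair > gm_mean + kappa * gm_std
```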
Abstract:
Excitation-continuous music instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling the improvement of naturalness and realism. We present a framework for modeling bowing control parameters in violin performance. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing control parameter signals. We model the temporal contour of bow velocity, bow pressing force, and bow-bridge distance as sequences of short Bézier cubic curve segments. Considering different articulations, dynamics, and performance contexts, a number of note classes are defined. Contours of bowing parameters in a performance database are analyzed at note level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes in consideration. As a result, contour analysis of bowing parameters of each note yields an optimal representation vector that is sufficient for reconstructing original contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures suitable for both the analysis and synthesis of bowing parameter contours. By using the estimated models, synthetic contours can be generated through a bow planning algorithm able to reproduce possible constraints caused by the finite length of the bow. Rendered contours are successfully used in two preliminary synthesis frameworks: digital waveguide-based bowed string physical modeling and sample-based spectral-domain synthesis.
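As a small illustration of the representation named above, the following sketch evaluates one cubic Bézier segment of a bowing-parameter contour, assuming NumPy; the control points are invented for illustration, and the note-level grammar, the fitting procedure, and the Gaussian-mixture model are not reproduced here.

```python
# Sketch of one building block of the contour representation: a cubic Bezier
# segment standing in for a short stretch of a bowing-control signal
# (e.g. bow velocity over time). Control points are illustrative only.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Evaluate a cubic Bezier curve defined by four control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical control points for a (time, bow-velocity) segment of one note
p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [0.1, 0.6], [0.3, 0.8], [0.4, 0.7]))
segment = cubic_bezier(p0, p1, p2, p3)   # n x 2 array: time vs. velocity contour
```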
Abstract:
In my thesis I present the findings of a multiple-case study on the CSR approach of three multinational companies, applying Basu and Palazzo's (2008) CSR-character as a process model of sensemaking, Suchman's (1995) framework on legitimation strategies, and Habermas' (1996) concept of deliberative democracy. The theoretical framework is based on the assumption of a postnational constellation (Habermas, 2001), which sends multinational companies onto a process of sensemaking (Weick, 1995) with regard to their responsibilities in a globalizing world. The major reason is that mainstream CSR concepts are based on the assumption of a liberal market economy embedded in a nation state, an assumption that does not fit the changing conditions for legitimation of corporate behavior in a globalizing world. For the purpose of this study, I primarily looked at two research questions: (i) How can the CSR approach of a multinational corporation be systematized empirically? (ii) What is the impact of the changing conditions in the postnational constellation on the CSR approach of the studied multinational corporations? For the analysis, I adopted a holistic approach (Patton, 1980), combining elements of a deductive and inductive theory-building methodology (Eisenhardt, 1989b; Eisenhardt & Graebner, 2007; Glaser & Strauss, 1967; Van de Ven, 1992) and rigorous qualitative data analysis. Primary data was collected through 90 semi-structured interviews, conducted in two rounds, with executives and managers in three multinational companies and their respective stakeholders. Raw data originating from interview tapes, field notes, and contact sheets was processed, stored, and managed using the software program QSR NVIVO 7. In the analysis, I applied qualitative methods to strengthen the interpretative part as well as quantitative methods to identify dominating dimensions and patterns. I found three different coping behaviors that provide insights into the corporate mindset. The results suggest that multinational corporations increasingly turn towards relational approaches of CSR to achieve moral legitimacy in formalized dialogical exchanges with their stakeholders, since legitimacy can no longer be derived only from a national framework. I also looked at the degree to which they have reacted to the postnational constellation by their assumption of former state duties and at the underlying reasoning. The findings indicate that CSR approaches become increasingly comprehensive through integrating political strategies that reflect the growing (self-)perception of multinational companies as political actors. Based on the results, I developed a model which relates the different dimensions of corporate responsibility to the discussion on deliberative democracy, global governance and social innovation, to provide guidance for multinational companies in a postnational world. With my thesis, I contribute to management research by (i) delivering a comprehensive critique of the mainstream CSR literature and (ii) filling the gap of thorough qualitative research on CSR in a globalizing world using the CSR-character as an empirical device, and (iii) to organizational studies by further advancing the deliberative view of the firm proposed by Scherer and Palazzo (2008).
Abstract:
Introduction: This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value falls relative to the accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for the M/B ratios. The model (1) captures the convergence of price-to-book ratios (negative for growth stocks and positive for value stocks), i.e. firm migration, (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility consistent with the empirical evidence. The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage out any discrepancy that arises between them. The reason is that immediate arbitrage would induce a definite expenditure of transaction costs whereas, without arbitrage intervention, there exists some, perhaps sufficient, probability that these two interest rates will come back together without any costs having been incurred. Hence, one can surmise that at equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlation at the business cycle frequency. I assume that dividend growth rates jump from one state to another, while countries' switches are possibly correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures quite well the historical patterns of stock return volatilities. Moreover, I can explain the time behavior of the correlation, but exclusively under the assumption of a global business cycle.
Abstract:
In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first case, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or strongly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path dependence. Furthermore, we show that the providers' decisions cause the weighted average waiting time to converge towards the market's reference waiting time. Finally, a laboratory experiment in which subjects play the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect subjects' performance and decisions. In particular, the provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustment decisions it has already taken but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making process have received little attention. Although this work has been useful for improving the efficiency of many queueing systems, or for designing new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we differ from this traditional research by analysing how the agents involved in the system make decisions instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010).
We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results. In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information (a minimal form of this update rule is sketched after this abstract). We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take into account uncertainty. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely achieve the Nash equilibrium. Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for the previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked-in" to a monopoly or duopoly situation.
The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects of the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature related to queueing systems, which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is a large potential for further work spanning several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g. service price, quality, customers' profile); analysing different decision rules; and studying other characteristics which determine the profile of customers and managers.
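A minimal formulation of the adaptive expectations update referred to in the abstract above, written in generic notation of our own choosing rather than the thesis' exact symbols:

```latex
% Adaptive expectations update, in generic notation (symbols are ours, not the
% thesis'): the expected sojourn time at facility $i$ combines memory and the
% most recent observation with a weight $\theta \in [0,1]$; $\theta$ close to 1
% corresponds to "conservative" agents, close to 0 to "reactive" ones.
\[
  \hat{S}^{\,i}_{t+1} \;=\; \theta\, \hat{S}^{\,i}_{t} \;+\; (1-\theta)\, S^{\,i}_{t},
\]
where $\hat{S}^{\,i}_{t}$ is the expectation held at period $t$ and $S^{\,i}_{t}$ is the newly observed sojourn time.
```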
Abstract:
This text presents a reading of Rousseau from his political dimension and from a present-day perspective. Through an approach to the conception and argumentation of the elements that make up the political project of El contracte social (The Social Contract), namely justice, morality and reason, the authors think modernity through from the assumption of liberal civic conventionalism. Public space is thus the axis that allows them to unpack the arguments that condition a human being who unfolds far beyond nature, and that give meaning to an education aware of one of its most relevant intrinsic purposes.
Abstract:
This paper proposes a very fast method for blindly approximating a nonlinear mapping which transforms a sum of random variables. The estimation is surprisingly good even when the basic assumption is not satisfied. We use the method for providing a good initialization for inverting post-nonlinear mixtures and Wiener systems. Experiments show that the algorithm speed is strongly improved and the asymptotic performance is preserved with a very low extra computational cost.
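The "basic assumption" in this line of work is typically that the input to the unknown mapping, being a sum of random variables, is close to Gaussian by the central limit theorem, so the mapping can be approximated by Gaussianizing the observed marginal distribution. The sketch below illustrates that general trick on synthetic data, assuming NumPy and SciPy; it is not the authors' exact procedure.

```python
# Sketch of the Gaussianization idea underlying this family of methods: if the
# nonlinearity's input is (approximately) a Gaussian sum, mapping the observed
# samples through their empirical CDF followed by the inverse Gaussian CDF
# recovers the input up to an affine factor. Synthetic data for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(8, 5000)).sum(axis=0)   # sum of sources, roughly Gaussian
x = np.tanh(s)                                        # unknown invertible nonlinearity

# Empirical CDF of the observations, then inverse Gaussian CDF (Gaussianization)
ranks = (np.argsort(np.argsort(x)) + 0.5) / x.size
s_hat = norm.ppf(ranks)                               # estimate of s up to scale/offset

corr = np.corrcoef(s, s_hat)[0, 1]
print("correlation between true and recovered inputs: %.3f" % corr)
```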
Abstract:
Although sources in general nonlinear mixtures are not separable using only statistical independence, a special and realistic case of nonlinear mixtures, the post-nonlinear (PNL) mixture, is separable by choosing a suited separating system. Then, a natural approach is based on the estimation of the separating system parameters by minimizing an independence criterion, like estimated mutual information. This class of methods requires higher (than 2) order statistics and cannot separate Gaussian sources. However, the use of (weak) priors, like source temporal correlation or nonstationarity, leads to other source separation algorithms which are able to separate Gaussian sources, and a few of them can even work with second-order statistics. Recently, modeling time-correlated sources by Markov models, we proposed very efficient algorithms based on minimization of the conditional mutual information. Currently, using the prior of temporally correlated sources, we investigate the feasibility of inverting PNL mixtures with non-bijective nonlinearities, like quadratic functions. In this paper, we review the main ICA and BSS results for nonlinear mixtures, present PNL models and algorithms, and finish with advanced results using temporally correlated sources.
Abstract:
This paper proposes a very fast method for blindly initializing a nonlinear mapping which transforms a sum of random variables. The method provides a surprisingly good approximation even when the basic assumption is not fully satisfied. The method can be used successfully for initializing the nonlinearity in post-nonlinear mixtures or in Wiener system inversion, improving algorithm speed and convergence.
Abstract:
In this paper we present a method for blind deconvolution of linear channels based on source separation techniques, for real-world signals. This technique, applied to blind deconvolution problems, is based on exploiting not the spatial independence between signals but the temporal independence between samples of the signal. Our objective is to minimize the mutual information between samples of the output in order to retrieve the original signal. In order to make use of this idea, the input signal must be a non-Gaussian i.i.d. signal. Because most real-world signals do not have this i.i.d. nature, we need to preprocess the original signal before transmission into the channel. Likewise, we must ensure that the transmitted signal has non-Gaussian statistics in order for the algorithm to function correctly. The strategy used for this preprocessing is presented in this paper. If the receiver has the inverse of the preprocessing, the original signal can be reconstructed without the convolutive distortion.
Abstract:
We present a framework for modeling right-hand gestures in bowed-string instrument playing, applied to the violin. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing gesture parameter cues. We model the temporal contour of bow transversal velocity, bow pressing force, and bow-bridge distance as sequences of short segments, in particular Bézier cubic curve segments. Considering different articulations, dynamics, and contexts, a number of note classes is defined. Gesture parameter contours of a performance database are analyzed at note level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes under consideration. Based on dynamic programming, gesture parameter contour analysis provides an optimal curve parameter vector for each note. The information present in such a parameter vector is enough for reconstructing the original gesture parameter contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures, suitable for both analysis and synthesis of bowing gesture parameter contours. We show the potential of the model by synthesizing bowing gesture parameter contours from an annotated input score. Finally, we point out promising applications and developments.
Abstract:
We propose an in-depth study of tissue modelization and classification techniques on T1-weighted MR images. Three approaches have been taken into account to perform this validation study. Two of them are based on the Finite Gaussian Mixture (FGM) model. The first one consists only of pure Gaussian distributions (FGM-EM). The second one uses a different model for partial volume (PV) (FGM-GA). The third one is based on a Hidden Markov Random Field (HMRF) model. All methods have been tested on a Digital Brain Phantom image considered as the ground truth. Noise and intensity non-uniformities have been added to simulate real image conditions. The effect of an anisotropic filter is also considered. Results demonstrate that methods relying on both intensity and spatial information are in general more robust to noise and inhomogeneities. However, in some cases there are no significant differences between the presented methods.
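A minimal sketch of the FGM-EM idea named above, assuming NumPy and SciPy: a one-dimensional finite Gaussian mixture is fitted by expectation-maximisation to synthetic intensities and each voxel is assigned to its most probable class. The intensities and class count are illustrative; the partial-volume (FGM-GA) and HMRF models are not reproduced.

```python
# Sketch under stated assumptions: hand-rolled EM for a 1-D finite Gaussian
# mixture over placeholder T1 intensities, followed by a hard per-voxel
# classification. Real validation would use phantom data, not random numbers.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Placeholder intensities for three pure tissue classes (e.g. CSF, GM, WM)
x = np.concatenate([rng.normal(30, 5, 2000),
                    rng.normal(70, 6, 3000),
                    rng.normal(110, 7, 3000)])

K = 3
pi = np.full(K, 1.0 / K)                  # mixing proportions
mu = np.quantile(x, [0.2, 0.5, 0.8])      # crude initial means
sigma = np.full(K, x.std())               # initial standard deviations

for _ in range(100):
    # E-step: posterior responsibility of each class for each voxel
    dens = np.stack([pi[k] * norm.pdf(x, mu[k], sigma[k]) for k in range(K)])
    resp = dens / dens.sum(axis=0)
    # M-step: re-estimate mixture parameters from the responsibilities
    nk = resp.sum(axis=1)
    pi = nk / x.size
    mu = (resp * x).sum(axis=1) / nk
    sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

labels = resp.argmax(axis=0)              # hard tissue classification per voxel
```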