926 results for Process Models
Abstract:
Soil erosion by water is a major driving force of land degradation. Laboratory experiments, on-site field studies, and suspended sediment measurements have been the fundamental approaches for studying the mechanisms of soil water erosion and quantifying erosive losses during rain events. Such experimental research faces the challenge of extending its results to wider spatial scales. Soil water erosion modeling offers a solution to these scaling problems and is of principal importance for better understanding the processes governing water erosion. However, soil water erosion models have been considered to have limited practical value, with uncertainties in hydrological simulation among the reasons hindering their development. Hydrological models have improved substantially in recent years, and several water erosion models have taken advantage of these improvements. It is therefore crucial to know how changes in the modeling of hydrological processes affect soil erosion simulation.
This dissertation first created an erosion modeling tool (GEOtopSed) that builds on the comprehensive hydrological model GEOtop. The newly created tool was then tested and evaluated in an experimental watershed, where GEOtopSed showed its ability to estimate multi-year soil erosion rates under varied hydrological conditions. To investigate the impact of different hydrological representations on soil erosion simulation, an 11-year simulation experiment was conducted for six model configurations. The results were compared at multiple temporal and spatial scales to highlight the role of hydrological feedbacks in erosion. Models with simplified hydrological representations agreed with GEOtopSed at long temporal scales (annual and longer). This result motivated an investigation of erosion simulation under different rainfall regimes, to check whether models with different hydrological representations agree on the response of soil water erosion to a changing climate. Multi-year ensemble simulations with different extreme precipitation scenarios were conducted for seven climate regions. The differences in the simulated erosion revealed the influence of hydrological feedbacks, which a purely rainfall-erosivity-based method cannot capture.
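As a point of contrast with the process-based models above, the rainfall-erosivity approach reduces a storm to an erosivity index. Below is a minimal sketch of one common index, the RUSLE event erosivity EI30 (unit-energy form after Brown and Foster); the time step, variable names, and example storm are illustrative assumptions, not GEOtopSed code.

```python
# Event rainfall erosivity EI30: total storm energy times the maximum
# 30-minute intensity. The unit-energy equation is the RUSLE form.

import numpy as np

def event_erosivity(depth_mm: np.ndarray, minutes_per_step: float = 5.0) -> float:
    """EI30 in MJ*mm/(ha*h) for one storm given per-step rainfall depths."""
    intensity = depth_mm * 60.0 / minutes_per_step               # mm/h per step
    unit_energy = 0.29 * (1.0 - 0.72 * np.exp(-0.05 * intensity))  # MJ/(ha*mm)
    total_energy = float(np.sum(unit_energy * depth_mm))         # MJ/ha

    # Maximum 30-minute intensity via a moving sum over the depth series.
    window = max(1, int(round(30.0 / minutes_per_step)))
    depth_30 = np.convolve(depth_mm, np.ones(window), mode="valid")
    i30 = float(depth_30.max()) * 60.0 / (window * minutes_per_step)  # mm/h

    return total_energy * i30

# Example: a short convective burst recorded at 5-minute resolution (assumed).
storm = np.array([0.5, 2.0, 6.0, 9.0, 4.0, 1.0, 0.3])
print(f"EI30 = {event_erosivity(storm):.1f} MJ*mm/(ha*h)")
```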
Abstract:
The problem of social diffusion has animated sociological thinking on topics ranging from the spread of an idea, an innovation, or a disease to the foundations of collective behavior and political polarization. While network diffusion has been a productive metaphor, the reality of diffusion processes is often muddier. Ideas and innovations diffuse differently from diseases, but, with a few exceptions, the diffusion of ideas and innovations has been modeled under the same assumptions as the diffusion of disease. In this dissertation, I develop two new diffusion models for "socially meaningful" contagions that address two of the most significant problems with current diffusion models: (1) that contagions can only spread along observed ties, and (2) that contagions do not change as they spread between people. I augment insights from these statistical and simulation models with an analysis of an empirical case of diffusion: the use of enterprise collaboration software in a large technology company. The empirical study focuses on when people abandon innovations, a crucial and understudied aspect of the diffusion of innovations. Using timestamped posts, I analyze in fine detail when people abandon the software.
To address the first problem, I suggest a latent space diffusion model. Rather than treating ties as stable conduits for information, the latent space diffusion model treats ties as random draws from an underlying social space and simulates diffusion over that space. To address the second problem, I suggest a diffusion model with schemas. Rather than treating information as though it spreads unchanged, the schema diffusion model allows people to modify the information they receive to fit an underlying mental model before they pass it to others. Theoretically, the social space model integrates actor ties and attributes simultaneously in a single social plane, while incorporating schemas into diffusion processes gives an explicit form to the reciprocal influences that cognition and social environment have on each other. Practically, the latent space diffusion model produces statistically consistent diffusion estimates where using the network alone does not, and the schema diffusion model shows that introducing cognitive processing into diffusion changes both the rate and the ultimate distribution of the spreading information. Combining the latent space models with a schema notion for actors thus improves our models of social diffusion both theoretically and practically.
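A minimal sketch of the latent-space idea follows: rather than spreading over a fixed observed graph, the contagion spreads over interactions redrawn each step from positions in an underlying social space. The two-dimensional space, logistic distance decay, and transmission probability are illustrative assumptions, not the dissertation's specification.

```python
# Latent space diffusion toy: ties are random draws from a social space,
# so a contagion can reach neighbors who share no observed tie.

import numpy as np

rng = np.random.default_rng(0)
n_actors, n_steps = 200, 25

positions = rng.normal(size=(n_actors, 2))   # latent social space
adopted = np.zeros(n_actors, dtype=bool)
adopted[0] = True                            # seed adopter

def contact_prob(d, scale=1.0):
    """Probability of an interaction, decaying with latent distance."""
    return 1.0 / (1.0 + np.exp((d - scale) * 4.0))

for _ in range(n_steps):
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=2)
    # Each step, ties are redrawn from the space rather than held fixed.
    contacts = rng.random((n_actors, n_actors)) < contact_prob(dists)
    exposed = contacts[:, adopted].any(axis=1)
    adopted |= exposed & (rng.random(n_actors) < 0.3)  # transmission prob.

print(f"adopters after {n_steps} steps: {adopted.sum()} / {n_actors}")
```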
The empirical case study focuses on how the changing value of an innovation, introduced by the innovation's network externalities, influences when people abandon it. I find that people are least likely to abandon an innovation when other people in their neighborhood currently use it as well. The effect is particularly pronounced for a supervisor's current use and for the number of supervisory team members who currently use the software. This case study not only points to an important process in the diffusion of innovation but also suggests a new approach, computerized collaboration systems, to collecting and analyzing data on organizational processes.
Abstract:
Energy efficiency and user comfort have recently become priorities in the Facility Management (FM) sector. This has resulted in the use of innovative building components, such as thermal solar panels and heat pumps, as they have the potential to provide better performance, energy savings, and increased user comfort. However, as the complexity of components increases, so does the requirement for maintenance management. The standard routine for building maintenance is inspection, which results in repairs or replacement when a fault is found. This routine leads to unnecessary inspections, which carry costs in component downtime and work hours. This research proposes an alternative routine: performing building maintenance at the point in time when the component is degrading and requires maintenance, thus reducing the frequency of unnecessary inspections. This thesis demonstrates that statistical techniques can be used as part of a maintenance management methodology to invoke maintenance before failure occurs. The proposed FM process is presented through a scenario utilising current Building Information Modelling (BIM) technology and innovative contractual and organisational models. This FM scenario supports a Degradation-based Maintenance (DbM) scheduling methodology, implemented using two statistical techniques: Particle Filters (PFs) and Gaussian Processes (GPs). DbM consists of extracting and tracking a degradation metric for a component. Limits for the degradation metric are identified by one of a number of proposed processes, which determine the limits according to the maturity of the available historical information. DbM is implemented for three case study components: a heat exchanger, a heat pump, and a set of bearings. The degradation points identified for each case study by a PF, a GP, and a hybrid (combined PF and GP) DbM implementation are assessed against known degradation points. The GP implementations are successful for all components. For the PF implementations, the results presented in this thesis show that the extracted metrics and limits identify degradation occurrences accurately for components in continuous operation; for components with seasonal operational periods, the PF may wrongly identify degradation. The GP performs more robustly than the PF, but the PF, on average, produces fewer false positives. The hybrid implementations, which combine the GP and PF results, are successful for two of the three case studies and are not affected by seasonal data. Overall, DbM is effectively applied for the three case study components. The accuracy of the implementations depends on the relationships modelled by the PF and GP and on the type and quantity of data available. This novel maintenance process can improve equipment performance and reduce energy wastage from the operation of BSCs.
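A minimal sketch of the GP side of DbM, under assumed data and an assumed degradation limit: fit a Gaussian process to a drifting degradation metric and flag maintenance when the predicted metric, including its uncertainty band, is expected to cross the limit. scikit-learn's GP regressor is used here for brevity; the thesis's own implementation may differ.

```python
# Degradation-based maintenance sketch: track a metric, fit a GP, and
# schedule maintenance where the predicted band crosses the limit.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0, 100, 60)[:, None]                 # days in service
metric = 0.02 * t.ravel() + rng.normal(0, 0.1, 60)   # drifting efficiency loss

LIMIT = 2.2  # degradation limit, e.g. derived from historical data (assumed)

gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(t, metric)

t_future = np.linspace(0, 150, 150)[:, None]
mean, std = gp.predict(t_future, return_std=True)

crossing = t_future[mean + 2 * std > LIMIT]          # upper band vs. limit
if crossing.size:
    print(f"schedule maintenance before day {crossing[0, 0]:.0f}")
else:
    print("no limit crossing predicted within the horizon")
```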
Abstract:
Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, the spectral mixture (SM) kernel was proposed to model the spectral density of a single task in a Gaussian process framework. This work develops a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. The expressive capabilities of the CSM kernel are demonstrated through implementation of 1) a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel, and 2) a Gaussian process factor analysis model, where factor scores represent the utilization of cross-spectral neural circuits. Results are presented for measured multi-region electrophysiological data.
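For reference, the single-output spectral mixture kernel models a stationary covariance as a mixture of Q Gaussian spectral components. The cross-covariance sketched below, which adds a per-channel-pair weight and phase shift to each component, indicates the general form such a kernel can take; it is a sketch, not the exact CSM parameterization.

```latex
% Single-output spectral mixture (SM) kernel:
k_{\mathrm{SM}}(\tau) = \sum_{q=1}^{Q} w_q \,
    \exp\!\left(-2\pi^2 \tau^2 \sigma_q^2\right) \cos\!\left(2\pi \tau \mu_q\right)
% Sketched cross-covariance between channels i and j: each component carries
% a channel-pair weight and a phase shift, which is what lets a multi-output
% kernel express both power and phase relationships (assumed form):
k_{ij}(\tau) = \sum_{q=1}^{Q} w_{ij,q} \,
    \exp\!\left(-2\pi^2 \tau^2 \sigma_q^2\right)
    \cos\!\left(2\pi \tau \mu_q + \phi_{ij,q}\right)
```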
Abstract:
This review summarizes evidence of dysregulated reward circuitry function in a range of neurodevelopmental and psychiatric disorders and genetic syndromes. First, we discuss how identifying a core mechanistic process across disparate disorders can contribute to disease classification, followed by a review of the neurobiology of reward circuitry. We next consider preclinical animal models and clinical evidence of reward-pathway dysfunction in a range of disorders, including psychiatric disorders (i.e., substance-use disorders, affective disorders, eating disorders, and obsessive-compulsive disorder), neurodevelopmental disorders (i.e., schizophrenia, attention-deficit/hyperactivity disorder, autism spectrum disorders, Tourette's syndrome, and conduct disorder/oppositional defiant disorder), and genetic syndromes (i.e., Fragile X syndrome, Prader-Willi syndrome, Williams syndrome, Angelman syndrome, and Rett syndrome). We also provide brief overviews of effective psychopharmacologic agents that act on the dopamine system in these disorders. The review concludes with methodological considerations for future research designed to probe reward-circuitry dysfunction more clearly, with the ultimate goal of improved intervention strategies.
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems must be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models are expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest for various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
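A minimal sketch of the dynamic programming step that makes these models expensive to estimate, and that the thesis exploits: the logsumexp value function of a dynamic discrete choice model in the recursive logit style, solved on a toy network by fixed-point iteration. The network, utilities, and scale parameter are illustrative assumptions.

```python
# Value iteration for a dynamic discrete choice (recursive logit style)
# model: V(k) = mu * log sum_a exp((v(a|k) + V(a)) / mu), V(destination) = 0.

import numpy as np

# Toy directed network: node 3 is the destination; utilities[k][a] is the
# deterministic utility (negative cost) of moving from node k to node a.
utilities = {0: {1: -1.0, 2: -1.5}, 1: {2: -0.5, 3: -2.0}, 2: {3: -1.0}}
MU = 1.0  # scale of the i.i.d. extreme value error terms

V = {k: 0.0 for k in (0, 1, 2, 3)}
for _ in range(100):
    V_new = dict(V)
    for k, succ in utilities.items():
        V_new[k] = MU * np.log(sum(np.exp((u + V[a]) / MU)
                                   for a, u in succ.items()))
    if max(abs(V_new[k] - V[k]) for k in V) < 1e-10:
        break
    V = V_new

# Choice probabilities at a node follow the logit form over successors.
probs = {a: np.exp((u + V[a] - V[0]) / MU) for a, u in utilities[0].items()}
print({a: round(p, 3) for a, p in probs.items()})
```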
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD treats models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), in which models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain: the Communication Modeling Language (CML), whose DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception, with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE, subsequently producing a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK through swappable framework extensions. The approach involves first creating an i-DSML and its DSVM for a second domain (demand-side smart grid, or microgrid, energy management) and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
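An illustrative sketch (in Python, for brevity) of the decoupling being argued for: a generic model of execution that delegates all domain-specific knowledge to a swappable extension, so that a synthesis engine for a new DSVM is instantiated by supplying a new DSK rather than rewriting the engine. All class and method names below are hypothetical.

```python
# Generic model of execution (GMoE) with swappable domain-specific
# knowledge (DSK): the engine only knows "diff the model, emit scripts".

from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable DSK: everything the engine must not hard-code."""
    @abstractmethod
    def diff(self, old_model, new_model) -> list: ...
    @abstractmethod
    def to_script(self, change) -> str: ...

class SynthesisEngine:
    """GMoE: detect model changes at runtime and emit executable scripts."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk
        self.current = None

    def execute(self, model) -> list:
        changes = self.dsk.diff(self.current, model)
        self.current = model
        return [self.dsk.to_script(c) for c in changes]

class CommunicationDSK(DomainKnowledge):  # e.g., a CML-like domain
    def diff(self, old, new):
        return [c for c in new.get("connections", [])
                if c not in (old or {}).get("connections", [])]
    def to_script(self, change):
        return f"OPEN_CONNECTION {change}"

engine = SynthesisEngine(CommunicationDSK())
print(engine.execute({"connections": ["alice<->bob"]}))
```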
Abstract:
Routes of migration and exchange are important factors in the debate about how the Neolithic transition spread into Europe. Studying the genetic diversity of livestock can help trace some of these past events. Notably, the domestic goat (Capra hircus) had no wild progenitor (Capra aegagrus) in Europe before its arrival from the Near East. Studies of mitochondrial DNA have shown that the diversity in European domesticated goats is a subset of that in the wild, underlining the ancestral relationship between the two populations. Additionally, an ancient DNA study of Neolithic goat remains has indicated that a high level of genetic diversity was already present early in the Neolithic at northwestern Mediterranean sites. We used coalescent simulations and approximate Bayesian computation, conditioned on patterns of modern and ancient mitochondrial DNA diversity in domesticated and wild goats, to test a series of simplified models of the goat domestication process. Specifically, we ask whether domestic goats descend from populations that were distinct prior to domestication. Although the models we present require further analysis, preliminary results indicate that wild and domestic goats are more likely to descend from a single ancestral wild population that was managed 11,500 years before present, and that serial founding events characterise the spread of Capra hircus into Europe.
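A minimal sketch of rejection-based approximate Bayesian computation of the kind used to compare such models: draw parameters from a prior, simulate, and keep draws whose summary statistics land close to the observed ones. The simulator below is a stand-in toy, not a coalescent model of goat mtDNA, and all numbers are assumed.

```python
# Rejection ABC toy: accept parameter draws whose simulated summary
# statistic is within a tolerance of the observed statistic.

import numpy as np

rng = np.random.default_rng(2)
observed_diversity = 0.65          # e.g., haplotype diversity (assumed value)

def simulate_diversity(founding_pop_size: float) -> float:
    """Stand-in simulator: diversity rises with founding population size."""
    return 1.0 - np.exp(-founding_pop_size / 500.0) + rng.normal(0, 0.02)

accepted = []
for _ in range(20000):
    theta = rng.uniform(50, 5000)              # prior on founding pop. size
    if abs(simulate_diversity(theta) - observed_diversity) < 0.01:
        accepted.append(theta)

post = np.array(accepted)
print(f"posterior mean founding size ~ {post.mean():.0f} "
      f"({len(post)} accepted draws)")
```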
Abstract:
Key Performance Indicators (KPIs) and their predictions are widely used by enterprises for informed decision making. Nevertheless, a very important and generally overlooked factor is that top-level strategic KPIs are actually driven by operational-level business processes. These two domains are, however, mostly segregated and analysed in silos with different Business Intelligence solutions. In this paper, we propose an approach for advanced Business Simulations that converges the two domains by utilising process execution and business data, together with concepts from Business Dynamics (BD) and Business Ontologies, to promote better system understanding and detailed KPI predictions. Our approach incorporates the automated creation of Causal Loop Diagrams, empowering the analyst to critically examine the complex dependencies hidden in the massive amounts of available enterprise data. We have further evaluated the proposed approach in the context of a retail use case that involved verification of the automatically generated causal models by a domain expert.
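A small sketch of the causal-loop-diagram idea under assumed inputs: signed cause-to-effect edges between operational measures and KPIs are assembled into a directed graph, and cycles are classified as reinforcing or balancing by the parity of their negative links. The variables and edges are illustrative, not the paper's generation algorithm.

```python
# Build a causal loop diagram as a signed digraph and classify its loops.

import networkx as nx

# Signed influences mined from process-execution and business data (assumed).
edges = [("marketing_spend", "store_traffic", "+"),
         ("store_traffic", "sales", "+"),
         ("sales", "inventory", "-"),
         ("inventory", "stockouts", "-"),
         ("stockouts", "sales", "-"),
         ("sales", "marketing_spend", "+")]   # budget reinvestment

cld = nx.DiGraph()
for cause, effect, sign in edges:
    cld.add_edge(cause, effect, sign=sign)

for loop in nx.simple_cycles(cld):
    signs = [cld[loop[i]][loop[(i + 1) % len(loop)]]["sign"]
             for i in range(len(loop))]
    # An even number of negative links makes a loop self-reinforcing.
    polarity = "reinforcing" if signs.count("-") % 2 == 0 else "balancing"
    print(f"{polarity} loop: {' -> '.join(loop + [loop[0]])}")
```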
Abstract:
Variations are inherent in all manufacturing processes and can significantly affect the quality of a final assembly, particularly in multistage assembly systems. Existing research in variation management has primarily focused on incorporating GD&T factors into variation propagation models in order to predict product quality and allocate tolerances. However, process-induced variation, which has a key influence on process planning, has not been fully studied. Furthermore, the link between variation and cost has not been well established, in particular the effect that assembly process selection has on the final quality and cost of a product. To overcome these barriers, this paper proposes a novel method that utilizes process capabilities to establish the relationship between variation and cost. The methodology is discussed using a real industrial case study. The benefits include determining the optimum configuration of an assembly system and facilitating the rapid introduction of novel assembly techniques to achieve a competitive edge.
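One standard way to quantify the variation side of such a relationship, given here as a generic illustration rather than the paper's exact formulation, is the process capability index together with the expected Taguchi quality loss, which turns both spread and mean shift into a cost:

```latex
% Process capability for a characteristic with mean \mu, standard deviation
% \sigma, and specification limits [LSL, USL]:
C_{pk} = \min\!\left( \frac{USL - \mu}{3\sigma},\; \frac{\mu - LSL}{3\sigma} \right)
% Expected Taguchi quadratic quality loss about target T, with cost
% coefficient k:
E[L] = k \left( \sigma^{2} + (\mu - T)^{2} \right)
```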
Abstract:
In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, they tolerate inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM.
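A condensed sketch of the per-frame loop described above, with Open3D standing in for the paper's GPGPU ICP implementation; render_depth() and grab_depth_frame() are hypothetical helpers, and the exact pose-composition convention depends on the renderer.

```python
# Per-frame CAD-model tracking: render the model at the latest pose,
# convert both depth maps to point clouds, and align them with ICP.

import numpy as np
import open3d as o3d

def depth_to_cloud(depth_m: np.ndarray,
                   intrinsics: o3d.camera.PinholeCameraIntrinsic):
    img = o3d.geometry.Image(depth_m.astype(np.float32))
    return o3d.geometry.PointCloud.create_from_depth_image(
        img, intrinsics, depth_scale=1.0)

def track_frame(pose, intrinsics, render_depth, grab_depth_frame):
    # Point cloud of the CAD model as seen from the latest pose estimate.
    model_cloud = depth_to_cloud(render_depth(pose), intrinsics)
    # Point cloud of the live depth frame.
    frame_cloud = depth_to_cloud(grab_depth_frame(), intrinsics)
    # ICP yields the rigid transform aligning the frame onto the model,
    # i.e., the incremental camera motion since the previous estimate.
    reg = o3d.pipelines.registration.registration_icp(
        frame_cloud, model_cloud, 0.05,  # 5 cm correspondence threshold
        estimation_method=o3d.pipelines.registration
                             .TransformationEstimationPointToPoint())
    return pose @ np.linalg.inv(reg.transformation)
```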
Abstract:
Queueing theory is the mathematical study of queues, or waiting lines, in which an item from inventory is provided to a customer on completion of service. A typical queueing system consists of a queue and a server. Customers arrive in the system from outside and join the queue in a certain way; the server picks up customers and serves them according to a certain service discipline; and customers leave the system immediately after their service is completed. For queueing systems, queue length, waiting time, and busy period are of primary interest in applications. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or in the system, the mean queue length, the traffic intensity, the expected number waiting or receiving service, the mean busy period, the distribution of queue length, and the probability of encountering the system in certain states, such as empty, full, having an available server, or having to wait a certain time to be served.
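For the simplest case, the M/M/1 queue (Poisson arrivals at rate lambda, exponential service at rate mu, a single server), these performance measures take closed forms:

```latex
% M/M/1 with arrival rate \lambda, service rate \mu, and \rho = \lambda/\mu < 1:
\rho = \frac{\lambda}{\mu}, \qquad
P(\text{system empty}) = 1 - \rho, \qquad
L = \frac{\rho}{1 - \rho}, \qquad
W = \frac{1}{\mu - \lambda}
% Here L is the mean number in the system and W the mean time in the system,
% linked by Little's law, L = \lambda W.
```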
Abstract:
This work presents a computational code, called MOMENTS, developed for use in process control to determine a characteristic transfer function of industrial units when radiotracer techniques are applied to study the unit's performance. The methodology is based on measuring the residence time distribution (RTD) function and calculating the first and second temporal moments of the tracer data obtained by two NaI scintillation detectors positioned to register the complete movement of the tracer inside the unit. A nonlinear regression technique is used to fit various mathematical models, and a statistical test selects the best-fitting transfer function. Using the MOMENTS code, twelve different models can be fitted to a curve in order to calculate technical parameters of the unit.
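The temporal moments in question are the standard moments of the residence time distribution E(t). A short sketch on a synthetic tracer curve follows; MOMENTS' twelve models and its fitting procedure are not reproduced here.

```python
# First and second temporal moments of an RTD: mean residence time is the
# first moment of E(t); the variance is its second central moment.

import numpy as np

t = np.linspace(0, 60, 600)                      # time, s
dt = t[1] - t[0]
response = t * np.exp(-t / 8.0)                  # synthetic detector counts

E = response / (response.sum() * dt)             # normalized RTD, E(t)
t_bar = (t * E).sum() * dt                       # first moment: mean residence time
var = ((t - t_bar) ** 2 * E).sum() * dt          # second central moment

print(f"t_bar = {t_bar:.2f} s, sigma^2 = {var:.2f} s^2")
```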
Abstract:
This case study research reports on a small and medium-sized enterprise (SME) business-to-business (B2B) services firm implementing a novel new service development (NSD) process. It provides accounts of what occurred in practice in terms of the challenges to NSD process implementation and how the firm overcame them. It also considers the implications for NSD in this and other firms' innovation practices. The longitudinal case study (18 months) was conducted "inside" the case organization and covered the entire innovation process, from initiation to the launch of a new service. The primary method may be viewed as participant observation. The research involved all those participating in the firm's innovation system, including decision-makers, middle managers, employees at lower hierarchical levels, and the firm's external networks. Implications for researchers and managers focusing on structured innovation models for the services sector are also presented.
Abstract:
Objective: Leadership is particularly important in complex, highly interprofessional health care contexts involving a number of staff, some from the same specialty (intraprofessional) and others from different specialties (interprofessional). The authors recently published the concept of "The Burns Suite" (TBS) as a novel simulation tool to deliver interprofessional and teamwork training. It is unclear which leadership behaviors are most important in an interprofessional burns resuscitation scenario, and whether they can be mapped onto current leadership theory. The purpose of this study was to perform a comprehensive video analysis of leadership behaviors within TBS. Methods: A total of 3 burns resuscitation simulations within TBS were recorded. The video analysis was grounded-theory-inspired. Using predefined criteria, actions/interactions deemed to be leadership behaviors were identified, and through an inductive, iterative process, 8 main leadership behaviors were identified. Cohen's κ coefficient was used to measure inter-rater agreement and was calculated as κ = 0.7 (substantial agreement). Each video was watched 4 times, focusing on 1 of the 4 team members per viewing (senior surgeon, senior nurse, trainee surgeon, and trainee nurse), and the frequency and types of leadership behavior of each team member were recorded. Differences were assessed for statistical significance using analysis of variance, with p < 0.05 taken as significant. Leadership behaviors were triangulated with verbal cues and actions from the videos. Results: All 3 scenarios were successfully completed. The mean scenario length was 22 minutes. A total of 362 leadership behaviors were recorded from the 12 participants. The most evident leadership behavior for all team members was adhering to guidelines (which effectively equates to following Advanced Trauma Life Support/Emergency Management of Severe Burns resuscitation guidelines, and hence "maintaining standards"), followed by making decisions. Although in terms of total frequency the senior surgeon engaged in more leadership behaviors than the other team members, there was no statistically significant difference among the 4 members across the 8 leadership categories. This analysis highlights that "distributed leadership" was predominant: leadership was "distributed", or "shared", among team members. The leadership behaviors within TBS also appeared to fall in line with the "direction, alignment, and commitment" ontology. Conclusions: Effective leadership is essential for the successful functioning of work teams and the accomplishment of task goals. As the resuscitation of a patient with major burns is a dynamic event, team leaders require flexibility in their leadership behaviors to adapt effectively to changing situations. Understanding the leadership behaviors of different team members within an authentic simulation can identify important behaviors required to optimize nontechnical skills in a major resuscitation. Furthermore, mapping these behaviors onto leadership models can further our understanding of leadership theory. Collectively, this can aid the development of refined simulation scenarios for team members and can be extrapolated to other areas of simulation-based team training and interprofessional education.
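For reference, Cohen's κ as used above for inter-rater agreement: with p_o the observed proportion of agreement between raters and p_e the agreement expected by chance,

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
% The reported \kappa = 0.7 therefore indicates agreement well above chance
% ("substantial" on the commonly used Landis--Koch scale).
```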