837 results for integrated-process model
Abstract:
Self-regulation is a coping strategy that allows older drivers to drive safely for longer. Self-regulation depends largely on drivers' ability to evaluate their own driving. Therefore, the success of self-regulation, in terms of driving safety, is influenced by the extent to which older drivers have insight into their declining driving performance. In addition, previous studies suggest that providing feedback to older adults regarding their driving skills may lead them to change their driving behaviour. However, little is currently known about the impact of feedback on older drivers' self-awareness and their subsequent driving regulatory behaviour. This study explored the process of self-regulation and driving cessation among older drivers using the Precaution Adoption Process Model (PAPM) as a framework. It also investigated older adults' perceptions and opinions about receiving feedback regarding their driving abilities. Qualitative focus groups with 27 participants aged 70 years or more were conducted. Thematic analysis resulted in the development of five main themes: the meaning of driving, changes in driving pattern, feedback, the planning process, and solutions. The analysis also resulted in an initial model of driving self-regulation among older drivers, informed by the current research and by the PAPM as the theoretical framework. It identifies a number of social, personal, and environmental factors that can either facilitate or hinder people's transition between stages of change. The findings from this study suggest that further elaboration of the PAPM is needed to take into account the role of insight and feedback in the process of self-regulation among older drivers.
Abstract:
Business Process Management (BPM) (Dumas et al. 2013) investigates how organizations function and how they can be improved on the basis of their business processes. The starting point for BPM is that organizational performance is a function of process performance. Thus, BPM proposes a set of methods, techniques and tools to discover, analyze, implement, monitor and control business processes, with the ultimate goal of improving these processes. Most importantly, BPM is not just an organizational management discipline. BPM also studies how technology, and particularly information technology, can effectively support the process improvement effort. In the past two decades the field of BPM has been the focus of extensive research, which spans an ever-growing scope and advances technology in various directions. The main international forum for state-of-the-art research in this field is the International Conference on Business Process Management, or "BPM" for short, an annual meeting of the aca ...
Abstract:
Semantic priming occurs when a subject is faster in recognising a target word when it is preceded by a related word compared to an unrelated word. The effect is attributed to automatic or controlled processing mechanisms elicited by short or long interstimulus intervals (ISIs) between primes and targets. We employed event-related functional magnetic resonance imaging (fMRI) to investigate blood oxygen level dependent (BOLD) responses associated with automatic semantic priming using an experimental design identical to that used in standard behavioural priming tasks. Prime-target semantic strength was manipulated by using lexical ambiguity primes (e.g., bank) and target words related to dominant or subordinate meaning of the ambiguity. Subjects made speeded lexical decisions (word/nonword) on dominant related, subordinate related, and unrelated word pairs presented randomly with a short ISI. The major finding was a pattern of reduced activity in middle temporal and inferior prefrontal regions for dominant versus unrelated and subordinate versus unrelated comparisons, respectively. These findings are consistent with both a dual process model of semantic priming and recent repetition priming data that suggest that reductions in BOLD responses represent neural priming associated with automatic semantic activation and implicate the left middle temporal cortex and inferior prefrontal cortex in more automatic aspects of semantic processing.
Abstract:
Fusing data from multiple sensing modalities, e.g. laser and radar, is a promising approach to achieving resilient perception in challenging environmental conditions. However, it may lead to catastrophic fusion in the presence of inconsistent data, i.e. when the sensors do not detect the same target due to distinct attenuation properties. It is often difficult to discriminate consistent from inconsistent data across sensing modalities using local spatial information alone. In this paper we present a novel consistency test, based on the log marginal likelihood of a Gaussian process model, that evaluates data from range sensors in a relative manner. A new data point is deemed consistent if the model statistically improves as a result of its fusion. This approach avoids the need for the absolute spatial distance threshold parameters required by previous work. We report results from object reconstruction with both synthetic and experimental data that demonstrate an improvement in reconstruction quality, particularly in cases where data points are inconsistent yet spatially proximal.
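The relative consistency test described in this abstract can be sketched as comparing the per-point log marginal likelihood of a Gaussian process before and after fusing a candidate point. This is a minimal illustration under simplified assumptions (zero-mean GP, RBF kernel, invented hyperparameters, and a per-point-average acceptance rule); the paper's actual model and decision rule may differ.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=0.2, signal_var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def log_marginal_likelihood(X, y, noise_var=0.01):
    """Log marginal likelihood of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

def is_consistent(X, y, x_new, y_new):
    """Accept the new point if the per-point log marginal likelihood
    does not degrade when the point is fused into the model."""
    lml_before = log_marginal_likelihood(X, y) / len(X)
    X2, y2 = np.append(X, x_new), np.append(y, y_new)
    lml_after = log_marginal_likelihood(X2, y2) / len(X2)
    return lml_after >= lml_before

# A smooth surface observed by one sensor ...
X = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * X)
# ... a second sensor's reading that agrees with the surface is typically kept,
print(is_consistent(X, y, 0.5, 0.0))
# ... while a wildly inconsistent return is rejected.  # → False
print(is_consistent(X, y, 0.5, 5.0))
```

Note the test is relative, as in the abstract: no absolute distance threshold appears; only the change in model evidence decides.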
Abstract:
Modelling fluvial processes is an effective way to reproduce basin evolution and to recreate riverbed morphology. However, due to the complexity of alluvial environments, deterministic modelling of fluvial processes is often impossible. To address the related uncertainties, we derive a stochastic fluvial process model on the basis of the convective Exner equation that uses the statistics (mean and variance) of river velocity as input parameters. These statistics allow for quantifying the uncertainty in riverbed topography, river discharge and position of the river channel. In order to couple the velocity statistics and the fluvial process model, the perturbation method is employed with a non-stationary spectral approach to develop the Exner equation as two separate equations: the first one is the mean equation, which yields the mean sediment thickness, and the second one is the perturbation equation, which yields the variance of sediment thickness. The resulting solutions offer an effective tool to characterize alluvial aquifers resulting from fluvial processes, which allows incorporating the stochasticity of the paleoflow velocity.
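The mean/perturbation decomposition described above can be written schematically. The symbols below (η for sediment thickness, v for flow velocity, c(v) for the bed celerity) are illustrative; the paper's exact formulation may differ.

\[
\frac{\partial \eta}{\partial t} + c(v)\,\frac{\partial \eta}{\partial x} = 0,
\qquad
\eta = \langle\eta\rangle + \eta', \quad v = \langle v\rangle + v'.
\]

Taking the ensemble mean yields an equation for \(\langle\eta\rangle\), the mean sediment thickness; the residual perturbation equation, closed with the non-stationary spectral approach, yields the variance \(\sigma_\eta^2 = \langle \eta'^2\rangle\), i.e. the uncertainty in riverbed topography driven by the statistics of the paleoflow velocity.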
Abstract:
Abnormally high price spikes in spot electricity markets represent a significant risk to market participants. As such, a literature has developed that focuses on forecasting the probability of such spike events, moving beyond simply forecasting the price level. Many univariate time series models have been proposed to deal with spikes within an individual market region. This paper is the first to develop a multivariate self-exciting point process model for dealing with price spikes across connected regions in the Australian National Electricity Market. The importance of the physical infrastructure connecting the regions to the transmission of spikes is examined. It is found that spikes are transmitted between the regions and that the size of spikes is influenced by the available transmission capacity. It is also found that improved risk estimates are obtained when inter-regional linkages are taken into account.
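A multivariate self-exciting (Hawkes) point process of the kind described above has a conditional intensity in which each past spike in region j raises the future spike rate in region i. A minimal sketch with exponential decay kernels follows; all parameter values are invented, and the paper's marks (spike sizes), estimation procedure and transmission-capacity covariates are not reproduced here.

```python
import numpy as np

def hawkes_intensity(t, mu, alpha, beta, events):
    """Conditional intensity of a multivariate self-exciting (Hawkes)
    point process with exponential decay kernels.
    mu[i]      : baseline spike rate in region i
    alpha[i,j] : excitation of region i by spikes in region j
    beta       : decay rate of the excitation
    events[j]  : past spike times in region j"""
    lam = np.array(mu, dtype=float)
    for j, times in enumerate(events):
        past = np.asarray([s for s in times if s < t])
        if past.size:
            kicks = np.exp(-beta * (t - past)).sum()
            lam += alpha[:, j] * kicks
    return lam

# Illustrative two-region example (all numbers made up):
mu = [0.1, 0.1]
alpha = np.array([[0.3, 0.2],    # region 0 excited by itself and region 1
                  [0.1, 0.4]])   # region 1 excited by region 0 and itself
beta = 1.5
events = [[1.0, 2.5], [2.0]]     # past spike times in each region
print(hawkes_intensity(3.0, mu, alpha, beta, events))
```

The off-diagonal entries of `alpha` are what encode the inter-regional spike transmission the paper finds; setting them to zero recovers independent univariate models per region.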
Abstract:
By quantifying the effects of climatic variability in the sheep grazing lands of north-western and western Queensland, the key biological rates of mortality and reproduction can be predicted for sheep. These rates are essential components of a decision support package which can prove a useful management tool for producers, especially if they can easily obtain the necessary predictors. When the sub-models of the GRAZPLAN ruminant biology process model were re-parameterised with Queensland data, and an empirical equation predicting the probability of ewes mating was added, the process model predicted the probability of pregnancy well (86% of variation explained). Predicting mortality from GRAZPLAN was less successful, but an empirical equation based on the relative condition of the animal (a measure based on liveweight), pregnancy status and age explained 78% of the variation in mortalities. A crucial predictor in these models was liveweight, which is not often recorded on producer properties. Empirical models based on climatic and pasture conditions estimated from the pasture production model GRASP predicted marking and mortality rates for Mitchell grass (Astrebla sp.) pastures (81% and 63% of the variation explained). These prediction equations were tested against independent data from producer properties, and the model was successfully validated for Mitchell grass communities.
Abstract:
We explored mental toughness in soccer using a triangulation of data capture involving players (n = 6), coaches (n = 4), and parents (n = 5). Semi-structured interviews, based on a personal construct psychology (Kelly, 1955/1991) framework, were conducted to elicit participants' perspectives on the key characteristics and their contrasts, situations demanding mental toughness, and the behaviours displayed and cognitions employed by mentally tough soccer players. The results from the research provided further evidence that mental toughness is conceptually distinct from other psychological constructs such as hardiness. The findings also supported Gucciardi, Gordon, and Dimmock's (2009) process model of mental toughness. A winning mentality and desire was identified as a key attribute of mentally tough soccer players in addition to other previously reported qualities such as self-belief, physical toughness, work ethic/motivation, and resilience. Key cognitions reported by mentally tough soccer players enabled them to remain focused and competitive during training and matches and highlighted the adoption of several forms of self-talk in dealing with challenging situations. Minor revisions to Gucciardi and colleagues' definition of mental toughness are proposed.
Abstract:
This article presents a method for checking the conformance between an event log capturing the actual execution of a business process, and a model capturing its expected or normative execution. Given a business process model and an event log, the method returns a set of statements in natural language describing the behavior allowed by the process model but not observed in the log and vice versa. The method relies on a unified representation of process models and event logs based on a well-known model of concurrency, namely event structures. Specifically, the problem of conformance checking is approached by folding the input event log into an event structure, unfolding the process model into another event structure, and comparing the two event structures via an error-correcting synchronized product. Each behavioral difference detected in the synchronized product is then verbalized as a natural language statement. An empirical evaluation shows that the proposed method scales up to real-life datasets while producing more concise and higher-level difference descriptions than state-of-the-art conformance checking methods.
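The paper's method folds the log and unfolds the model into event structures and compares them via an error-correcting synchronized product. As a toy illustration only of the kind of natural-language statements conformance checking produces, the fragment below diffs flat sets of traces; all activity names are invented, and this does not implement the event-structure technique itself.

```python
# Behaviour allowed by the (toy) process model, as complete traces.
model_traces = {("register", "check", "approve"),
                ("register", "check", "reject")}
# Behaviour actually observed in the (toy) event log.
log_traces = {("register", "check", "approve"),
              ("register", "approve")}          # skips the check step

only_in_model = model_traces - log_traces       # allowed but never observed
only_in_log = log_traces - model_traces         # observed but not allowed

# Verbalize each behavioural difference as a natural-language statement.
for t in sorted(only_in_model):
    print("In the model but never in the log:", " -> ".join(t))
for t in sorted(only_in_log):
    print("In the log but not allowed by the model:", " -> ".join(t))
```

The event-structure representation in the paper generalizes this idea to concurrency and partial orders, which a flat trace diff cannot express.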
Abstract:
Microarrays are high throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. A large number of steps and errors associated with each step make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving gene signal and further utilizing this improved signal for higher level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to signal with low noise, as the effect of unwanted variations is minimized and the precision of the estimates of the parameters of interest are maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied on these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of genes. Third, a novel image segmentation approach that segregates the fluorescent signal from the undesired noise is developed using an additional dye, SYBR green RNA II. This technique helped in identifying signal only with respect to the hybridized DNA, and signal corresponding to dust, scratch, spilling of dye, and other noises, are avoided. Fourth, an integrated statistical model is developed, where signal correction, systematic array effects, dye effects, and differential expression, are modelled jointly as opposed to a sequential application of several methods of analysis. 
The methods described here have been tested only on cDNA microarrays but can, with some modifications, also be applied to other high-throughput technologies. Keywords: High-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
Abstract:
The current research proposed a conceptual design framework that lets airports obtain flexible departure layouts based on passenger activity analysis derived from Business Process Models (BPMs). BPMs available for airport terminals were used as a design tool to uncover the relationships between a spatial layout and the corresponding passenger activities. An algorithm was developed that demonstrates the applicability of the proposed design framework by generating relative spatial layouts from passenger activity analysis. The generated relative spatial layout assists architects in producing suitable alternative layouts to meet the changing needs of an airport terminal.
Abstract:
Aerosols impact the planet and our daily lives through various effects, perhaps most notably their climatic and health-related consequences. While there are several primary particle sources, secondary new-particle formation from precursor vapors is also known to be a frequent, global phenomenon. Nevertheless, the formation mechanism of new particles, as well as the vapors participating in the process, remains a mystery. This thesis consists of studies on new-particle formation, specifically from the point of view of numerical modelling. A dependence of the formation rate of 3 nm particles on the sulphuric acid concentration to the power of 1-2 has been observed. This suggests that the nucleation mechanism is of first or second order with respect to the sulphuric acid concentration, in other words a mechanism based on activation or on kinetic collision of clusters. However, model studies have had difficulties in replicating the small exponents observed in nature. The work done in this thesis indicates that the exponents may be lowered by the participation of a co-condensing (and potentially nucleating) low-volatility organic vapor, or by increasing the assumed size of the critical clusters. On the other hand, the presented new and more accurate method for determining the exponent indicates high diurnal variability. Additionally, these studies included several semi-empirical nucleation rate parameterizations as well as a detailed investigation of the analysis used to determine the apparent particle formation rate. Because they cover a large proportion of the Earth's surface, oceans could prove to be climatically significant sources of secondary particles. In the absence of marine observation data, new-particle formation events in a coastal region were parameterized and studied. Since the formation mechanism is believed to be similar, the new parameterization was applied in a marine scenario.
The work showed that marine CCN production is feasible in the presence of additional vapors contributing to particle growth. Finally, a new method to estimate the concentrations of condensing organics was developed. The algorithm uses a Markov chain Monte Carlo method to determine the required combination of vapor concentrations by comparing a measured particle size distribution with one produced by an aerosol dynamics process model. The evaluation indicated excellent agreement with model data, and initial results with field data appear sound as well.
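The MCMC estimation step described above can be sketched as a random-walk Metropolis sampler that proposes vapor concentrations and scores them by the misfit between a measured size distribution and the process model's prediction. Everything below is invented for illustration: `toy_model` is a stand-in for the aerosol dynamics process model, and the single-parameter setup simplifies the paper's multi-vapor problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(c):
    """Stand-in for the aerosol dynamics process model: maps a vapor
    concentration c to a predicted particle size distribution (invented)."""
    d = np.linspace(10, 100, 30)                      # diameters, nm
    return np.exp(-0.5 * ((d - 20 - 0.5 * c) / 15) ** 2)

c_true = 40.0
measured = toy_model(c_true) + rng.normal(0, 0.01, 30)  # synthetic "measurement"

def log_post(c, sigma=0.01):
    """Gaussian misfit between measurement and model, flat prior on (0, 100)."""
    if not 0 < c < 100:
        return -np.inf
    r = measured - toy_model(c)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis over the vapor concentration.
c, lp = 10.0, log_post(10.0)
samples = []
for _ in range(5000):
    c_prop = c + rng.normal(0, 2.0)
    lp_prop = log_post(c_prop)
    if np.log(rng.random()) < lp_prop - lp:
        c, lp = c_prop, lp_prop
    samples.append(c)
post = np.array(samples[1000:])                       # discard burn-in
print(post.mean())                                    # close to c_true
```

The real algorithm searches over several vapor concentrations jointly, but the accept/reject mechanics are the same.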
Abstract:
This thesis contains three subject areas concerning particulate matter in urban air quality: 1) Analysis of the measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations in relation to traffic sources, and at different times of year and day. 2) The evolution of the number concentrations and sizes of traffic-exhaust particulate matter at the local street scale, studied by combining a dispersion model with an aerosol process model. 3) Analysis of some high particulate matter concentration situations with regard to their meteorological origins, especially temperature inversions, in the HMA and three other European cities. The prediction of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and the season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate total number concentrations; condensation and coagulation had minimal effect on the Aitken mode number concentrations.
The included air pollution episodes were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, atmospheric stability and, in some cases, wind speed. Concerning weather prediction during particulate matter related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
Abstract:
Action, Power and Experience in Organizational Change - A Study of Three Major Corporations. This study explores change management and resistance to change as social activities and power displays through worker experiences in three major Finnish corporations. Two important sensitizing concepts were applied. Firstly, Richard Sennett's perspective on work in the new form of capitalism, and its shortcomings - the lack of commitment and freedom accompanied by the disruption to lifelong career planning and the feeling of job insecurity - offered a fruitful starting point for a critical study. Secondly, Michel Foucault's classical concept of power, treated as anecdotal, interactive and nonmeasurable, provided tools for analyzing change-enabling and change-resisting acts. The study bridges the gap between management and social sciences: the former have usually concentrated on leadership issues, best practices and goal attainment, while the latter have covered worker experiences, power relations and political conflicts. The study was motivated by three research questions. Firstly, why do people resist or support changes in their work, work environment or organization, and on what kinds of analyses are these behavioural choices based? Secondly, what practical forms do support for, and resistance to, change take, and how do people choose between different ways of acting? Thirdly, how do the people involved experience and describe their own subject position and actions in changing environments? The examination focuses on practical interpretations and action descriptions given by the members of three major Finnish business organizations. The empirical data was collected during a two-year period in the Finnish Post Corporation, the Finnish branch of the Vattenfall Group, one of the leading European energy companies, and the Mehiläinen Group, the leading private medical service provider in Finland.
It includes 154 non-structured thematic interviews and 309 biographies concentrating on personal experiences of change. All positions and organizational levels were represented. The analysis was conducted using the grounded theory method introduced by Strauss and Corbin, in three sequential phases comprising open, axial and selective coding. The result is a hierarchical structure of categories, summarized in a process model of change behaviour patterns. Its key ingredients are past experiences and future expectations, which lead to different change relations and behavioural roles. Ultimately, these contribute to strategic and tactical choices realized as both public and hidden forms of action. The same forms of action can be used both to support and to resist change, and there are no specific dividing lines either between employer and employee roles or between different hierarchical positions. In general, however, it is possible to conclude that strategic choices lead more often to public forms of action, whereas tactical choices result in hidden forms. The primary goal of the study was to provide knowledge with practical applications in everyday business life, HR and change management. The results, therefore, are highly applicable to other organizations as well as to less change-dominated situations, whenever power relations and conflicting interests are present. A sociological thesis on classical business management issues can be of considerable value in revealing the crucial social processes behind behavioural patterns. Keywords: change management, organizational development, organizational resistance, resistance to change, labor relations, organization, leadership
Abstract:
Marja Heinonen's dissertation Verkkomedian käyttö ja tutkiminen. Iltalehti Online 1995-2001 (The use and study of online media: Iltalehti Online 1995-2001) describes the usage of the new internet-based news service Iltalehti Online during its first years of existence, 1995-2001. The study focuses on the content of the service and users' attitudes towards the new medium and its contents. Heinonen has also analyzed and described the research methods that can be used to study any new media phenomenon when there is no historical perspective from which to conduct the research. Heinonen has created a process model for the research of a net medium, based on a multidimensional approach. She chose an iterative research method inspired by Sudweeks and Simoff's CEDA methodology, in which qualitative and quantitative methods take turns both producing results and generating new research questions. The dissertation discusses and describes the possibilities of combining several research methods in the study of online news media. On a general level, it discusses the methodological possibilities of researching a completely new media form when there is no historical perspective; these discussions come out in favour of multidimensional methods. The empirical research was built around three cases of Iltalehti Online among its users: log analysis 1996-1999, interviews 1999 and clustering 2000-2001. Even though the results of the different cases were somewhat conflicting, the central results from the analysis of Iltalehti Online 1995-2001 are the following: - Reading was strongly determined by gender. - The structure of Iltalehti Online strongly guided reading. - People did not make a clear distinction in content between news and entertainment. - Users created new habits in their everyday life during the first years of using Iltalehti Online.
These habits were categorized as follows: a break between everyday routines; an established habit; a new practice within the rhythm of the day. In the clustering of users, sports, culture and celebrities were the most distinguishing contents; users did not move across these borders as much as within them. The dissertation contributes to the development of multidimensional research methods for studying emerging phenomena in the media field. It is also a unique description of a phase of development in media history, based on unique research material: no such information (logs + demographics) is available for any other Finnish online news media, either from those first years or today.