909 results for Computational complexity.
Abstract:
Objectives The intent of this paper is to examine health IT implementation processes – the barriers to and facilitators of successful implementation – and to identify a beginning set of implementation best practices, identify gaps in the health IT implementation body of knowledge, and make recommendations for future study and application. Methods A literature review identified six health IT related implementation best practices, which were subsequently debated and clarified by participants attending the NI2012 Research Post Conference held in Montreal in the summer of 2012. Using the Consolidated Framework for Implementation Research (CFIR) to guide their application, the six best practices were applied to two distinct health IT implementation studies to assess their applicability. Results Assessing the implementation processes from two markedly diverse settings illustrated both the challenges and the potential of using standardized implementation processes. Consistent with the literature review, "one size fits all" in health IT implementation is a fallacy, particularly when global diversity is added to the mix. At the same time, several frameworks show promise for use as "scaffolding" with which to assess best practices, their distinct dimensions, and their applicability. Conclusions Health IT innovations, regardless of the implementation setting, require a close assessment of many dimensions. While there is no "one size fits all", there are commonalities and best practices that can be blended, adapted, and utilized to improve the process of implementation. This paper examines health IT implementation processes and identifies a beginning set of implementation best practices that could begin to address gaps in the health IT implementation body of knowledge.
Abstract:
Organizational transformations reliant on successful ICT system developments (continue to) fail to deliver projected benefits even when contemporary governance models are applied rigorously. Modifications to traditional program, project, and systems development management methods have produced little material improvement in transformation success because they are unable to routinely address the complexity and uncertainty involved in dynamically aligning IS investments and innovation. Complexity theory provides insight into why this phenomenon occurs and is used here to develop a conceptualization of complexity in IS-driven organizational transformations. This research-in-progress aims to identify complexity formulations relevant to organizational transformation. Political/power-based influences, interrelated business rules, socio-technical innovation, impacts on stakeholders, and emergent behaviors are commonly regarded as characterizing complexity; the proposed conceptualization accommodates these as connectivity, irreducibility, entropy and/or information gain in hierarchical approximation and scaling, the number of states in a finite automaton and/or the dimension of an attractor, and information and/or variety.
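As a minimal illustration of one of the formulations listed above (entropy over an observed distribution of system states), the following Python sketch, with invented state labels, computes Shannon entropy:

```python
# Shannon entropy of an observed distribution of process states; the
# state labels below are invented for illustration only.
from collections import Counter
from math import log2

def shannon_entropy(observations):
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy(["approved", "rejected", "approved", "escalated"]))  # 1.5
```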
Abstract:
Carbon nanotubes with specific nitrogen doping are proposed for controllable, highly selective, and reversible CO2 capture. Using density functional theory incorporating long-range dispersion corrections, we investigated the adsorption behavior of CO2 on (7,7) single-walled carbon nanotubes (CNTs) with several nitrogen doping configurations and varying charge states. Pyridinic-nitrogen incorporation in CNTs is found to increase the CO2 adsorption strength with electron injection, leading to highly selective CO2 adsorption in comparison with N2. This functionality could enable intrinsically reversible CO2 adsorption, as capture/release can be controlled by switching the charge-carrying state of the system on/off. This phenomenon is verified for a number of different models and theoretical methods, with clear ramifications for possible implementation with a broader class of graphene-based materials. A scheme for the implementation of this remarkable reversible electrocatalytic CO2-capture phenomenon is considered.
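For reference, the charge-state-dependent adsorption strength discussed above is conventionally quantified by an adsorption energy of the form below; this is the standard definition, not necessarily the exact expression used in the paper:

\[
E_{\mathrm{ads}}(q) = E_{\mathrm{CNT+CO_2}}(q) - E_{\mathrm{CNT}}(q) - E_{\mathrm{CO_2}},
\]

where $q$ is the injected charge; a more negative $E_{\mathrm{ads}}$ indicates stronger binding.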
Abstract:
Capturing and sequestering carbon dioxide (CO2) can provide a route to partial mitigation of the climate change associated with anthropogenic CO2 emissions. Here we report a comprehensive theoretical study of CO2 adsorption on two phases of boron, α-B12 and γ-B28. The theoretical results demonstrate that electron-deficient boron materials such as α-B12 and γ-B28 can bond strongly with CO2 through Lewis acid–base interactions because the electron density is higher on their surfaces. To evaluate the capacity of these boron materials for CO2 capture, we also performed calculations at various degrees of CO2 coverage. The computational results indicate that CO2 capture on the boron phases is a kinetically and thermodynamically feasible process, and from this perspective these boron materials are therefore predicted to be good candidates for CO2 capture.
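The coverage-dependence calculations described above are conventionally summarised by an average adsorption energy per molecule; again, this is the standard convention rather than a formula quoted from the paper:

\[
\bar{E}_{\mathrm{ads}}(n) = \frac{E_{\mathrm{B}+n\mathrm{CO_2}} - E_{\mathrm{B}} - n\,E_{\mathrm{CO_2}}}{n},
\]

where $n$ is the number of CO2 molecules adsorbed on the boron substrate.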
Abstract:
Because of the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method for daily air pollution prediction that combines a Support Vector Machine (SVM) as the predictor with Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. CO concentrations from the Rey monitoring station in the south of Tehran, from January 2007 to February 2011, were used to test the effectiveness of this method. Hourly CO concentrations were predicted using the SVM and the hybrid PLS–SVM models, and daily CO concentrations were predicted from the same four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS–SVM model is more accurate. Statistical estimators, including the relative mean error, the root mean squared error, and the mean absolute relative error, were employed to compare the performance of the models. The errors decrease after size reduction, and the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. The hybrid PLS–SVM model also required less computational time than the SVM model, as expected, supporting its more accurate and faster prediction ability.
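A minimal sketch of the hybrid idea, assuming scikit-learn and synthetic stand-in data (the predictors, component count, and kernel are illustrative, not the study's configuration): PLS compresses the predictor matrix to a few latent components, and an SVM regressor is then fit on those components.

```python
# Hybrid PLS-SVM sketch: PLS reduces the predictors to latent components,
# then an RBF-kernel SVM regressor is fit on them. Data are synthetic
# stand-ins for the study's measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                                  # candidate predictors
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)   # stand-in CO series

model = make_pipeline(PLSRegression(n_components=4), SVR(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:5]))                                     # predicted CO values
```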
Abstract:
The reaction of the aromatic distonic peroxyl radical cations N-methylpyridinium-4-peroxyl (PyrOO•⁺) and 4-(N,N,N-trimethylammonium)phenyl peroxyl (AnOO•⁺) with symmetrical dialkyl alkynes 10 a–c was studied in the gas phase by mass spectrometry. PyrOO•⁺ and AnOO•⁺ were produced through reaction of the respective distonic aryl radical cations Pyr•⁺ and An•⁺ with oxygen, O2. For the reaction of Pyr•⁺ with O2, an absolute rate coefficient of k₁ = 7.1 × 10⁻¹² cm³ molecule⁻¹ s⁻¹ and a collision efficiency of 1.2% were determined at 298 K. The strongly electrophilic PyrOO•⁺ reacts with 3-hexyne and 4-octyne with absolute rate coefficients of k(hexyne) = 1.5 × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹ and k(octyne) = 2.8 × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹, respectively, at 298 K. The reaction of both PyrOO•⁺ and AnOO•⁺ proceeds by radical addition to the alkyne, whereas propargylic hydrogen abstraction was observed as a very minor pathway only in the reactions involving PyrOO•⁺. A major reaction pathway of the vinyl radicals 11 formed upon PyrOO•⁺ addition to the alkynes involves γ-fragmentation of the peroxy O–O bond and formation of PyrO•⁺. The PyrO•⁺ is rapidly trapped by intermolecular hydrogen abstraction, presumably from a propargylic methylene group in the alkyne. The reaction of the less electrophilic AnOO•⁺ with alkynes is considerably slower and resulted in formation of AnO•⁺ as the only charged product. These findings suggest that electrophilic aromatic peroxyl radicals act as oxygen atom donors, which can be used to generate α-oxo carbenes 13 (or isomeric species) from alkynes in a single step. Besides γ-fragmentation, a number of competing unimolecular dissociative reactions also occur in vinyl radicals 11. The potential energy diagrams of these reactions were explored with density functional theory and ab initio methods, which enabled identification of the chemical structures of the most important products.
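Assuming the standard definition of collision efficiency as the ratio of the measured rate coefficient to the ion-molecule collision rate coefficient, the figures above imply (as a back-of-envelope check, not a value quoted in the abstract):

\[
k_{\mathrm{coll}} \approx \frac{k_1}{\phi} = \frac{7.1 \times 10^{-12}}{0.012} \approx 5.9 \times 10^{-10}\ \mathrm{cm^3\,molecule^{-1}\,s^{-1}},
\]

which is of the order of magnitude typical for ion-molecule capture rates.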
Abstract:
Proton-bound dimers consisting of two glycerophospholipids with different headgroups were prepared using negative ion electrospray ionization and dissociated in a triple quadrupole mass spectrometer. Analysis of the tandem mass spectra of the dimers using the kinetic method provides, for the first time, an order of gas-phase acidity for the phospholipid classes: PE < PA << PG < PS < PI. Hybrid density functional calculations on model phospholipids were used to predict the absolute deprotonation enthalpies of the phospholipid classes from isodesmic proton transfer reactions with phosphoric acid. The computational data largely support the experimental acidity trend, with the exception of the relative acidity ranking of the two most acidic phospholipid species. Possible causes of the discrepancy between experiment and theory are discussed, and the experimental trend is recommended. The sequence of gas-phase acidities for the phospholipid headgroups is found to (1) have little correlation with the relative ionization efficiencies of the phospholipid classes observed in the negative ion electrospray process, and (2) correlate well with fragmentation trends observed upon collisional activation of phospholipid [M − H]⁻ anions.
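For context, the simple (entropy-neglecting) form of the kinetic method underlying this ranking relates the fragment ion abundances from the dissociating dimer to the difference in gas-phase acidities; the expression below is the textbook form, not one reproduced from the paper:

\[
\ln\frac{[\mathrm{A_1{-}H}]^-}{[\mathrm{A_2{-}H}]^-} \approx \frac{\Delta H_{\mathrm{acid}}(\mathrm{A_2H}) - \Delta H_{\mathrm{acid}}(\mathrm{A_1H})}{R\,T_{\mathrm{eff}}},
\]

where $T_{\mathrm{eff}}$ is the effective temperature of the dissociating dimer and a lower $\Delta H_{\mathrm{acid}}$ corresponds to a stronger gas-phase acid.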
Abstract:
The safety of passengers is a major concern for airports. In the event of a crisis, an effective and efficient evacuation process can significantly enhance passenger safety. Hence, airport operators need an in-depth understanding of the evacuation process of their airport terminal. Although evacuation models have been used to study pedestrian behaviour for decades, little research has considered evacuees' group dynamics and the complexity of the environment. In this paper, an agent-based model is presented to simulate the passenger evacuation process. Different exits were allocated to passengers based on their location and security level. The simulation results show that evacuation time can be influenced by passenger group dynamics. The model also provides a convenient way to design an airport evacuation strategy and examine its efficiency. The model was created using AnyLogic software, and its parameters were initialised using recent research data published in the literature.
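As a toy illustration of the exit-allocation rule described above (my own simplification in Python, not the AnyLogic implementation): each passenger is assigned the nearest exit whose security zone matches theirs.

```python
# Assign each passenger the nearest exit in their security zone.
# Exit names, positions, and zones are invented for illustration.
import math

EXITS = [{"name": "E1", "pos": (0.0, 0.0), "zone": "landside"},
         {"name": "E2", "pos": (50.0, 10.0), "zone": "airside"},
         {"name": "E3", "pos": (80.0, 40.0), "zone": "airside"}]

def assign_exit(passenger_pos, passenger_zone):
    feasible = [e for e in EXITS if e["zone"] == passenger_zone]
    return min(feasible, key=lambda e: math.dist(e["pos"], passenger_pos))

print(assign_exit((40.0, 5.0), "airside")["name"])  # nearest airside exit: E2
```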
Abstract:
This paper contributes to conversations about the funding and quality of education research. The paper proceeds in two parts. Part I sets the context by presenting an historical analysis of funding allocations made to Education research through the ARC’s Discovery projects scheme between the years 2002 and 2014, and compares these trends to allocations made to another field within the Social, Behavioural and Economic Sciences assessment panel: Psychology and Cognitive Science. Part II highlights the consequences of underfunding education research by presenting evidence from an Australian Research Council Discovery project that is tracking the experiences of disaffected students who are referred to behaviour schools. The re-scoping decisions that became necessary and the incidental costs that accrue from complications that occur in the field are illustrated and discussed through vignettes of research with “ghosts” who don’t like school but who do like lollies, chess and Lego.
Abstract:
Unsaturated water flow in soil is commonly modelled using Richards' equation, which requires the hydraulic properties of the soil (e.g., porosity, hydraulic conductivity) to be characterised. Naturally occurring soils, however, are heterogeneous in nature; that is, they are composed of a number of interwoven homogeneous soils, each with its own set of hydraulic properties. When the length scale of these soil heterogeneities is small, numerical solution of Richards' equation is computationally impractical due to the immense effort and refinement required to mesh the actual heterogeneous geometry. A classic way forward is to use a macroscopic model, in which the heterogeneous medium is replaced with a fictitious homogeneous medium that attempts to give the average flow behaviour at the macroscopic scale (i.e., at a scale much larger than the scale of the heterogeneities). Using homogenisation theory, a macroscopic equation can be derived that takes the form of Richards' equation with effective parameters. A disadvantage of the macroscopic approach, however, is that it fails when the assumption of local equilibrium does not hold. This limitation has seen the introduction of two-scale models that include, at each point in the macroscopic domain, an additional flow equation at the scale of the heterogeneities (microscopic scale). This report outlines a well-known two-scale model and contributes to the literature a number of important advances in its numerical implementation. These include the use of an unstructured control volume finite element method and image-based meshing techniques, which allow irregular micro-scale geometries to be treated, and the use of an exponential time integration scheme that permits both scales to be resolved simultaneously in a completely coupled manner. Numerical comparisons against a classical macroscopic model confirm that only the two-scale model correctly captures the important features of the flow for a range of parameter values.
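For reference, Richards' equation in one common mixed form is (standard notation, not necessarily the exact form used in the report):

\[
\frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \left[ K(\psi)\, \nabla (\psi + z) \right],
\]

where $\theta$ is the volumetric water content, $\psi$ the pressure head, $K(\psi)$ the unsaturated hydraulic conductivity, and $z$ the vertical coordinate; heterogeneity enters through the spatial variation of $\theta(\psi)$ and $K(\psi)$.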
Abstract:
Business processes are an important instrument for understanding and improving how companies provide goods and services to customers. Many companies have therefore documented their business processes well, often as Event-driven Process Chains (EPCs). Unfortunately, the resulting EPCs are often rather complex, so that the overall process logic is hidden in low-level process details. This paper proposes abstraction mechanisms for process models that aim to reduce their complexity while keeping the overall process structure. We assume that functions are annotated with efforts and splits with probabilities; this information is used to separate important process parts from less important ones. Real-world process models are used to validate the approach.
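A minimal sketch of how such annotations support abstraction, assuming a fragment is replaced by a single aggregated function (the aggregation rules below are the obvious ones for sequences and XOR splits, stated as an illustration rather than the paper's exact definitions):

```python
# Aggregate effort of an abstracted EPC fragment: efforts add up along a
# sequence, and an XOR split contributes the probability-weighted mean of
# its branches. Numbers are invented for illustration.
def sequence_effort(efforts):
    return sum(efforts)

def xor_split_effort(branches):
    # branches: list of (probability, effort) pairs; probabilities sum to 1
    return sum(p * e for p, e in branches)

fragment = sequence_effort([4.0, xor_split_effort([(0.7, 2.0), (0.3, 10.0)])])
print(fragment)  # 4.0 + (0.7*2.0 + 0.3*10.0) = 8.4
```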
Abstract:
As the level of autonomy in Unmanned Aircraft Systems (UAS) increases, there is an imperative need to develop methods for assessing robust autonomy. This paper focuses on the computations that lead to a set of measures of robust autonomy. These measures are the probabilities that selected performance indices, related to the mission requirements and airframe capabilities, remain within regions of acceptable performance.
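A minimal Monte Carlo sketch of such a measure, under assumptions of my own (a scalar performance index, a Gaussian/uniform uncertainty model, and an acceptance interval): the measure is estimated as the fraction of sampled scenarios in which the index stays within the acceptable region.

```python
# Estimate P(performance index within acceptable region) by sampling the
# uncertain parameters. The index, uncertainty model, and bounds are
# illustrative placeholders, not the paper's definitions.
import numpy as np

rng = np.random.default_rng(0)

def performance_index(wind_gust, payload):
    return 1.0 + 0.3 * wind_gust + 0.1 * payload   # placeholder index

samples = 10_000
gusts = rng.normal(0.0, 1.0, samples)
payloads = rng.uniform(0.0, 2.0, samples)
J = performance_index(gusts, payloads)

lo, hi = 0.5, 1.8                                  # acceptable region
print(np.mean((J >= lo) & (J <= hi)))              # measure of robust autonomy
```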
A low-complexity flight controller for Unmanned Aircraft Systems with constrained control allocation
Abstract:
In this paper, we propose a framework for the joint allocation and constrained control design of flight controllers for Unmanned Aircraft Systems (UAS). The actuator configuration is used to map the actuator constraint set into the space of the aircraft generalised forces. By constraining the demanded generalised forces, we ensure that the allocation problem is always feasible and can therefore be solved without constraints. This leads to an allocation problem that does not require on-line numerical optimisation. Furthermore, since the controller handles the constraints, there is no need to implement heuristics to inform the controller about actuator saturation. The latter is fundamental for avoiding Pilot Induced Oscillations (PIO) in remotely operated UAS due to the rate limits on the aircraft control surfaces.
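An illustrative sketch of the constrain-then-allocate idea (the effectiveness matrix, limits, and the direction-preserving scaling used to keep the demand attainable are my own simplifications, not the paper's design): once the demanded generalised forces are restricted to an attainable set, a fixed pseudo-inverse allocates them without on-line optimisation.

```python
# Saturate the demanded generalised forces so that pseudo-inverse
# allocation always respects the actuator limits. B, the limits, and the
# demand are invented example values.
import numpy as np

B = np.array([[1.0, 1.0, 0.0],
              [0.5, -0.5, 1.0]])          # maps actuator deflections to forces
u_max = np.array([1.0, 1.0, 0.5])         # symmetric limits |u_i| <= u_max_i
B_pinv = np.linalg.pinv(B)

def allocate(tau_demand):
    u = B_pinv @ tau_demand
    # Scale the demand (direction preserved) until every actuator is in range.
    scale = min(1.0, float(np.min(u_max / np.maximum(np.abs(u), 1e-12))))
    return u * scale                       # feasible by construction

print(allocate(np.array([3.0, -2.0])))
```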
Abstract:
The planning of IMRT treatments requires a compromise between dose conformity (complexity) and deliverability. This study investigates established and novel treatment complexity metrics for 122 IMRT beams from prostate treatment plans. The Treatment and Dose Assessor software was used to extract the necessary data from exported treatment plan files and calculate the metrics. For most of the metrics, there was strong overlap between the calculated values for plans that passed and failed their quality assurance (QA) tests. However, statistically significant variation between plans that passed and failed QA measurements was found for the established modulation index and for a novel metric describing the proportion of small apertures in each beam. The ‘small aperture score’ provided threshold values which successfully distinguished deliverable treatment plans from plans that did not pass QA, with a low false negative rate.
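As a toy illustration of a metric in the spirit of the 'small aperture score' (the cutoff and input format are invented; the authors' metric is computed by the Treatment and Dose Assessor software from exported plan files): the fraction of beam control points whose aperture area falls below a threshold.

```python
# Fraction of beam control points with an MLC-defined aperture area below
# a cutoff; areas and the cutoff are illustrative values in mm^2.
def small_aperture_score(aperture_areas_mm2, cutoff_mm2=2000.0):
    small = sum(1 for a in aperture_areas_mm2 if a < cutoff_mm2)
    return small / len(aperture_areas_mm2)

print(small_aperture_score([1500.0, 2500.0, 800.0, 3100.0]))  # -> 0.5
```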