934 results for dynamic methods
Abstract:
The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for eliciting information from multiple biological variables is the so-called "omics" disciplines of the biological sciences. Such variability is uncovered through multivariable data-mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Proteomic studies typically produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method used to generate the data. Many classification methods are limited by an n ≪ p constraint and therefore require pre-treatment to reduce dimensionality prior to classification. Machine learning techniques have recently gained popularity in the field for their ability to classify unknown samples successfully. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. A statistical model-based approach can address this problem: not only is the importance of each individual protein explicit, but the proteins are also combined into a readily interpretable classification rule without relying on a black-box approach. Here we apply the statistical dimension-reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
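The general workflow the abstract describes, reducing dimensionality before classifying in an n ≪ p regime, can be sketched as follows. This is a minimal illustration on synthetic data with an assumed class signal in the first 10 "proteins" and a simple nearest-centroid rule standing in for the classifiers the study actually compares:

```python
import numpy as np

# Synthetic "proteomic" data: far more variables (p) than observations (n)
rng = np.random.default_rng(0)
n, p = 60, 300
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)
X[y == 1, :10] += 4.0              # hypothetical class signal in 10 proteins

idx = rng.permutation(n)
train, test = idx[:40], idx[40:]

# Dimension reduction first: PCA via SVD fitted on the training data only
mu = X[train].mean(axis=0)
U, S, Vt = np.linalg.svd(X[train] - mu, full_matrices=False)
W = Vt[:2].T                       # top two principal directions
Z_tr, Z_te = (X[train] - mu) @ W, (X[test] - mu) @ W

# Then a simple classifier (nearest centroid) in the reduced space
c0 = Z_tr[y[train] == 0].mean(axis=0)
c1 = Z_tr[y[train] == 1].mean(axis=0)
pred = (np.linalg.norm(Z_te - c1, axis=1) <
        np.linalg.norm(Z_te - c0, axis=1)).astype(int)
accuracy = (pred == y[test]).mean()
```

The centroids and principal directions remain inspectable, which is the interpretability advantage the abstract attributes to model-based approaches over black-box classifiers.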
Abstract:
Software as a Service (SaaS) is a promising approach for Small and Medium Enterprises (SMEs), particularly those focused on growing fast and leveraging new technology, owing to the potential benefits of its inherent scalability, reduced total cost of ownership and ease of access to global innovations. This paper proposes a dynamic perspective on IS capabilities to understand and explain how SMEs source and leverage SaaS. The model is derived by combining the IS capabilities of Feeny and Willcocks (1998) with the dynamic capabilities of Teece (2007) and contextualizing them for SMEs and SaaS. We conclude that SMEs sourcing and leveraging SaaS require leadership, business systems thinking and informed buying for sensing and seizing SaaS opportunities, and require leadership and vendor development for transforming, in terms of aligning and realigning specific tangible and intangible assets.
Abstract:
This CD-ROM includes PDFs of presentations on the following topics: "TXDOT Revenue and Expenditure Trends"; "Examine Highway Fund Diversions, & Benchmark Texas Vehicle Registration Fees"; "Evaluation of the JACK Model"; "Future highway construction cost trends"; and "Fuel Efficiency Trends and Revenue Impact".
Abstract:
Peeling is an essential stage of the postharvest and processing industry; however, undesirable processing losses are unavoidable and have always been a main concern of the food processing sector. Fruits and vegetables are peeled by one of three methods, mechanical, chemical or thermal, depending on the class and type of produce. By comparison, mechanical methods are the most preferred: they do not create any harmful effects on the tissue and they keep the edible portions of the produce fresh. The main disadvantages of mechanical peeling are material loss and deformation. Clearly, reducing material losses and increasing the quality of the process have a direct effect on the overall efficiency of the food processing industry, and this calls for further study of the technological aspects of these operations. To enhance the effectiveness of industrial food practices, it is essential to have a clear understanding of material properties and the behaviour of tissues under industrial processes. This paper presents the scheme of a research programme that examines tissue damage of tough-skinned vegetables under mechanical peeling by developing a novel finite element (FE) model of the process using an explicit dynamic finite element analysis approach. A computer model of the mechanical peeling process will be developed in this study to simulate the energy consumption and the stress-strain interactions of cutter and tissue. Available finite element software packages and methods will be applied to establish the model. Improving knowledge of the interactions and variables involved in food operations, particularly in the peeling process, is the main objective of the proposed study. Understanding these interrelationships will help researchers and designers of food processing equipment to develop new and more efficient technologies.
The presented work reviews the available literature and previous work done in this area of research and identifies current gaps in the modelling and simulation of food processes.
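Explicit dynamic FE codes advance the solution in time with a conditionally stable central-difference scheme. A minimal single-degree-of-freedom sketch of that integrator (a mass-spring stand-in, not the cutter-tissue model the abstract proposes) shows the update structure:

```python
import math

# Central-difference (explicit) time integration of m*x'' + k*x = 0,
# the same update structure explicit-dynamics FE codes apply node by node.
m, k = 1.0, 1.0                 # assumed unit mass and stiffness
dt = 0.001                      # must satisfy dt < 2/omega for stability
x, v = 1.0, 0.0                 # initial displacement and velocity
a = -k * x / m                  # internal force -> acceleration

period = 2 * math.pi * math.sqrt(m / k)
steps = round(period / dt)
for _ in range(steps):
    v_half = v + 0.5 * dt * a   # half-step velocity update
    x = x + dt * v_half         # displacement update
    a = -k * x / m              # recompute acceleration from new state
    v = v_half + 0.5 * dt * a   # complete the velocity update
```

After integrating one full period the displacement returns to its initial value, a quick check that the explicit scheme is stable at this time step.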
Abstract:
Reducing complexity in Information Systems is a main concern in both research and industry. One strategy for reducing complexity is separation of concerns, which advocates separating various concerns, such as security and privacy, from the main concern. It results in less complex, more easily maintainable, and more reusable Information Systems. Separation of concerns is addressed through the Aspect Oriented paradigm. This paradigm has been well researched and implemented in programming, where languages such as AspectJ have been developed. However, research on aspect orientation for Business Process Management is still in its early stages. While some efforts have been made to propose Aspect Oriented Business Process Modelling, it has not yet been investigated how to enact such process models in a Workflow Management System. In this paper, we define a set of requirements specifying the execution of aspect oriented business process models. We create a Coloured Petri Net specification for the semantics of a so-called Aspect Service that fulfils these requirements. Such a service extends the capability of a Workflow Management System with support for the execution of aspect oriented business process models. The design specification of the Aspect Service is also inspected through state space analysis.
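The separation-of-concerns idea behind AspectJ-style "around" advice can be sketched in Python with a decorator: the cross-cutting concern (here, audit logging) is woven around the business function without touching its body. The function names and log format are illustrative, not from the paper:

```python
import functools

# A cross-cutting "logging" concern woven around business logic with a
# decorator -- a lightweight analogue of aspect-oriented "around" advice.
audit_log = []

def logged(func):
    @functools.wraps(func)
    def advice(*args, **kwargs):
        audit_log.append(f"before {func.__name__}{args}")   # before-advice
        result = func(*args, **kwargs)                      # proceed()
        audit_log.append(f"after {func.__name__} -> {result}")  # after-advice
        return result
    return advice

@logged
def approve_order(order_id):        # the main concern stays free of logging
    return f"order {order_id} approved"

status = approve_order(42)
```

The Aspect Service in the paper plays an analogous role at the workflow level, intercepting process execution so that advice processes run before, after, or around the main process activities.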
Abstract:
Fractional order dynamics in physics, particularly when applied to diffusion, leads to an extension of the concept of Brownian motion through a generalization of the Gaussian probability function to what is termed anomalous diffusion. As MRI is applied with increasing temporal and spatial resolution, the spin dynamics are being examined more closely; such examinations extend our knowledge of biological materials through a detailed analysis of relaxation time distribution and water diffusion heterogeneity. Here the dynamic models become more complex as they attempt to correlate new data with a multiplicity of tissue compartments where processes are often anisotropic. Anomalous diffusion in the human brain has been investigated using fractional order calculus. Recently, a new diffusion model was proposed by solving the Bloch-Torrey equation using fractional order calculus with respect to time and space (see R.L. Magin et al., J. Magnetic Resonance, 190 (2008) 255-270). However, effective numerical methods and supporting error analyses for the fractional Bloch-Torrey equation are still limited. In this paper, the space and time fractional Bloch-Torrey equation (ST-FBTE) is considered. The time and space derivatives in the ST-FBTE are replaced by the Caputo and the sequential Riesz fractional derivatives, respectively. Firstly, we derive an analytical solution for the ST-FBTE with initial and boundary conditions on a finite domain. Secondly, we propose an implicit numerical method (INM) for the ST-FBTE, and the stability and convergence of the INM are investigated. We prove that the implicit numerical method for the ST-FBTE is unconditionally stable and convergent. Finally, we present some numerical results that support our theoretical analysis.
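A standard building block for implicit schemes of this kind is the L1 discretisation of the Caputo time derivative. The sketch below (a generic L1 approximation, not the paper's INM) checks the formula against the known Caputo derivative of t², which is 2 t^(2-α)/Γ(3-α):

```python
import math

def caputo_l1(u, t, alpha, n_steps):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1:
    D^a u(t_n) ~ tau^(-a)/Gamma(2-a) * sum_j b_j [u(t_{n-j}) - u(t_{n-j-1})],
    with weights b_j = (j+1)^(1-a) - j^(1-a)."""
    tau = t / n_steps
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n_steps)]
    total = sum(b[j] * (u((n_steps - j) * tau) - u((n_steps - j - 1) * tau))
                for j in range(n_steps))
    return total * tau ** (-alpha) / math.gamma(2 - alpha)

alpha = 0.5
approx = caputo_l1(lambda s: s ** 2, 1.0, alpha, 1000)
exact = 2 * 1.0 ** (2 - alpha) / math.gamma(3 - alpha)  # Caputo deriv. of t^2
```

The L1 scheme converges at rate O(τ^(2-α)), so with 1000 steps the approximation agrees with the analytical value to well under one percent.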
Abstract:
In this paper, a class of fractional advection-dispersion models (FADMs) is investigated. This class includes five models: the immobile and mobile/immobile time FADM with a temporal fractional derivative 0 < γ < 1, the space FADM with skewness, the combined time and space FADM, and the time fractional advection-diffusion-wave model with damping with index 1 < γ < 2. These models describe nonlocal dependence on time, space, or both, to explain the development of anomalous dispersion. They can be used to simulate regional-scale anomalous dispersion with heavy tails, for example, solute transport in watershed catchments and rivers. We propose computationally effective implicit numerical methods for these FADMs. The stability and convergence of the implicit numerical methods are analyzed and compared systematically. Finally, numerical results are given to demonstrate the effectiveness of our theoretical analysis.
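Implicit schemes for space-fractional terms of this kind are commonly built from Grünwald-Letnikov weights, the coefficients of (1 - z)^α, generated by a simple recurrence. A sketch of that generic building block (not the paper's specific discretisations):

```python
def grunwald_weights(alpha, n):
    """First n Grunwald-Letnikov coefficients of (1 - z)**alpha, the
    weights used to discretise space-fractional derivatives in implicit
    finite-difference schemes.  Recurrence: g_0 = 1,
    g_k = g_{k-1} * (k - 1 - alpha) / k."""
    g = [1.0]
    for k in range(1, n):
        g.append(g[-1] * (k - 1 - alpha) / k)
    return g

# For a super-diffusive order such as alpha = 1.8 the weights decay like
# k**(-1 - alpha) and sum to zero, consistent with (1 - 1)**alpha = 0.
g = grunwald_weights(1.8, 10000)
```

As a sanity check, α = 2 reproduces the classical second-difference stencil 1, -2, 1, so the fractional operator smoothly generalises standard dispersion.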
Abstract:
Objective Although several validated nutritional screening tools have been developed to "triage" inpatients for malnutrition diagnosis and intervention, there continues to be debate in the literature as to which tool(s) clinicians should use in practice. This study compared the accuracy of seven validated screening tools in older medical inpatients against two validated nutritional assessment methods. Methods This was a prospective cohort study of medical inpatients at least 65 y old. Malnutrition screening was conducted using seven tools recommended in evidence-based guidelines. Nutritional status was assessed by an accredited practicing dietitian using the Subjective Global Assessment (SGA) and the Mini-Nutritional Assessment (MNA). Energy intake was observed on a single day during the first week of hospitalization. Results In this sample of 134 participants (80 ± 8 y old, 50% women), there was fair agreement between the SGA and MNA (κ = 0.53), with the MNA identifying more "at-risk" patients and the SGA better identifying existing malnutrition. Most tools were accurate in identifying patients with malnutrition as determined by the SGA, in particular the Malnutrition Screening Tool and the Nutritional Risk Screening 2002. The MNA Short Form was the most accurate at identifying nutritional risk according to the MNA. No tool accurately predicted patients with inadequate energy intake in the hospital. Conclusion Because all tools generally performed well, clinicians should consider choosing a screening tool that best aligns with their chosen nutritional assessment and is easiest to implement in practice. This study confirmed the importance of rescreening and monitoring food intake to allow the early identification and prevention of nutritional decline in patients with poor intake during hospitalization.
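The κ = 0.53 agreement statistic reported above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance from the marginal totals. A minimal sketch with hypothetical counts (not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts: rows = SGA (malnourished / not), cols = MNA
kappa = cohens_kappa([[40, 10], [5, 45]])
```

With these counts, raw agreement is 85% but chance agreement is 50%, giving κ = 0.7; kappa in the 0.4-0.6 band, as in the study, is conventionally read as fair-to-moderate agreement.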
Abstract:
The transmission of bacteria is more likely to occur from wet skin than from dry skin; therefore, the proper drying of hands after washing should be an integral part of the hand hygiene process in health care. This article systematically reviews the research on the hygienic efficacy of different hand-drying methods. A literature search was conducted in April 2011 using the electronic databases PubMed, Scopus, and Web of Science, with the search terms "hand dryer" and "hand drying". The search was limited to articles published in English from January 1970 through March 2011. Twelve studies were included in the review. Hand-drying effectiveness encompasses the speed of drying, degree of dryness, effective removal of bacteria, and prevention of cross-contamination. This review found little agreement regarding the relative effectiveness of electric air dryers. However, most studies suggest that paper towels can dry hands efficiently, remove bacteria effectively, and cause less contamination of the washroom environment. From a hygiene viewpoint, paper towels are superior to electric air dryers and should be recommended in locations where hygiene is paramount, such as hospitals and clinics.
Abstract:
Purpose: The management of unruptured aneurysms remains controversial, as treatment carries potentially significant risk to a currently well patient. The decision to treat is based upon aneurysm location, size and abnormal morphology (e.g. bleb formation). A method to predict bleb formation would thus help stratify patient treatment. Our study aims to investigate possible associations between intra-aneurysmal flow dynamics and bleb formation within intracranial aneurysms. Competing theories on aetiology appear in the literature; our purpose is to further clarify this issue. Methodology: We collected 3D rotational angiograms (3DRA) from 30 patients with cerebral aneurysms and bleb formation. Models representing aneurysms before bleb formation were reconstructed by digitally removing the bleb, and computational fluid dynamics (CFD) simulations were then run on both pre- and post-bleb models. Pulsatile flow and standard boundary conditions were imposed. Results: Aneurysmal flow structure, impingement regions, and wall shear stress magnitudes and gradients were produced for all models, and correlation of these parameters with bleb formation was sought. Certain CFD parameters show significant inter-patient variability, making statistically significant correlation difficult on the partial data subset obtained to date. Conclusion: CFD models are readily producible from 3DRA data. Preliminary results indicate that bleb formation appears to be related to regions of high wall shear stress and to direct impingement regions of the aneurysm wall.
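For scale, the wall shear stress (WSS) that the CFD models resolve can be anchored to the idealized Poiseuille value for a straight cylindrical vessel, τ_w = 4μQ/(πr³). The sketch below uses assumed, roughly physiological values; actual aneurysmal WSS in the study comes from patient-specific pulsatile CFD, not this formula:

```python
import math

# Idealized wall shear stress for fully developed (Poiseuille) flow in a
# straight cylindrical vessel: tau_w = 4 * mu * Q / (pi * r**3).
mu = 3.5e-3   # blood dynamic viscosity, Pa.s (assumed)
Q = 4.0e-6    # volumetric flow rate, m^3/s (assumed, ~4 mL/s)
r = 2.0e-3    # vessel radius, m (assumed, ~4 mm diameter artery)

tau_wall = 4 * mu * Q / (math.pi * r ** 3)   # Pa; ~2 Pa for these values
```

Values of a few pascals are typical of healthy cerebral arteries, which is why regions of markedly elevated WSS stand out as candidate sites for bleb formation.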
Abstract:
The present study considered factors influencing teachers' reporting of child sexual abuse (CSA). Conducted in three Australian jurisdictions with different reporting laws and policies, the study focused on teachers' actual past and anticipated future reporting of CSA. A sample of 470 teachers within randomly selected rural and urban schools was surveyed to identify training and experience; knowledge of reporting legislation and policy; attitudes; and reporting practices. Factors influencing actual past reporting and anticipated future reporting were identified using logistic regression modelling. This is the first study to simultaneously examine the effect of important influences on reporting practice using both retrospective and prospective approaches across jurisdictions with different reporting laws. Teachers who have actually reported CSA in the past are more likely to have higher levels of policy knowledge and to hold more positive attitudes towards reporting CSA along three specific dimensions: commitment to the reporting role; confidence in the system's effective response to their reporting; and the ability to override their concerns about the consequences of their reporting. Teachers indicating an intention to report hypothetical scenarios are more likely to hold reasonable grounds for suspecting CSA, to recognise that significant harm has been caused to the child, to know that their school policy requires a report, and to be able to override their concerns about the consequences of their reporting.
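Logistic regression of the kind used here models the log-odds of a binary outcome (reported / did not report) as a linear function of predictors. A toy sketch on synthetic data, with "policy knowledge score" as an assumed single predictor and plain gradient descent in place of the statistical software the study would have used:

```python
import math, random

# Synthetic survey: does a 0-10 "policy knowledge" score predict reporting?
random.seed(1)
scores = [random.uniform(0, 10) for _ in range(400)]
def p_true(s):                          # assumed true relationship
    return 1 / (1 + math.exp(-(0.8 * s - 4.0)))
reported = [1 if random.random() < p_true(s) else 0 for s in scores]

# Fit log-odds(reported) = w * score + b by gradient descent
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    gw = gb = 0.0
    for s, y in zip(scores, reported):
        p = 1 / (1 + math.exp(-(w * s + b)))
        gw += (p - y) * s               # gradient of the log-loss w.r.t. w
        gb += (p - y)                   # ... and w.r.t. b
    w -= lr * gw / len(scores)
    b -= lr * gb / len(scores)
```

A positive fitted coefficient w corresponds to the study's finding that higher policy knowledge raises the odds of having reported; exp(w) is the odds ratio per unit of score.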
Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models. The accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published for estimating drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method, and the Mode Central Gap (MCG) method, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and for a variety of minor stream movement types, to compare critical gap estimates from MLE and MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to adopting it for estimating critical gap parameters in guidelines.
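The Monte Carlo data-generation step described above can be sketched as follows: each simulated driver has a latent critical gap, rejects offered major-stream gaps shorter than it, and accepts the first gap at least as long, yielding the (maximum rejected gap, accepted gap) pair the estimators consume. All parameter values are assumed for illustration, and the midpoint estimate shown is only a crude stand-in for the ACG/SWCG/MCG definitions in the paper:

```python
import random

random.seed(42)
TRUE_MEAN, SD, GAP_RATE = 4.0, 0.7, 0.25   # s, s, veh/s (assumed values)

drivers = []
for _ in range(300):
    critical = max(0.5, random.gauss(TRUE_MEAN, SD))   # latent critical gap
    max_rejected = 0.0
    while True:
        gap = random.expovariate(GAP_RATE)             # offered gap
        if gap >= critical:
            drivers.append((max_rejected, gap))        # the two observables
            break
        max_rejected = max(max_rejected, gap)

# Crude midpoint estimate: the critical gap lies between the largest
# rejected gap and the accepted gap for every driver.
estimate = sum((r + a) / 2 for r, a in drivers) / len(drivers)
```

By construction the latent critical gap is bracketed by the two observables for every driver, which is the censoring structure that MLE exploits and that makes critical gaps unobservable directly in the field.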
Abstract:
Many substation applications require accurate time-stamping. The performance of systems such as Network Time Protocol (NTP), IRIG-B and one pulse per second (1-PPS) has been sufficient to date. However, new applications, including the IEC 61850-9-2 process bus and phasor measurement, require an accuracy of one microsecond or better. Furthermore, process bus applications are taking time synchronisation out into high-voltage switchyards, where cable lengths may have an impact on timing accuracy. IEEE Std 1588, Precision Time Protocol (PTP), is the means preferred by the smart grid standardisation roadmaps (from both the IEC and the US National Institute of Standards and Technology) for achieving this higher level of performance, and it integrates well into Ethernet-based substation automation systems. Significant benefits of PTP include automatic path-length compensation, support for redundant time sources and the cabling efficiency of a shared network. This paper benchmarks the performance of established IRIG-B and 1-PPS synchronisation methods over a range of path lengths representative of a transmission substation. The performance of PTP using the same distribution system is then evaluated and compared to the existing methods to determine whether the performance justifies the additional complexity. Experimental results show that a PTP timing system matches the synchronising performance of 1-PPS and IRIG-B timing systems when using the same fibre optic cables, and further meets the needs of process buses in large substations.
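PTP's automatic path-length compensation rests on the delay request-response exchange: from four timestamps (t1, t2, t3, t4) the slave solves for both its clock offset and the mean one-way path delay. A sketch with synthetic timestamps built from an assumed 5 µs offset and 2 µs symmetric delay:

```python
# IEEE 1588 delay request-response arithmetic.  Synthetic timestamps are
# constructed from a known offset and one-way delay, then recovered.
OFFSET, DELAY = 5e-6, 2e-6     # assumed slave clock offset and path delay, s

t1 = 100.0                     # Sync message sent (master clock)
t2 = t1 + DELAY + OFFSET       # Sync received (slave clock)
t3 = 200.0                     # Delay_Req sent (slave clock)
t4 = t3 - OFFSET + DELAY       # Delay_Req received (master clock)

offset = ((t2 - t1) - (t4 - t3)) / 2       # slave clock minus master clock
path_delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
```

Because the delay term cancels out of the offset estimate, a longer fibre run into the switchyard does not degrade synchronisation, provided the path is symmetric in both directions; asymmetry is the residual error source.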