932 results for Modified Direct Analysis Method


Relevance: 100.00%

Abstract:

Development of reliable methods for optimised energy storage and generation is one of the most pressing challenges in modern power systems. In this paper, an adaptive approach to the load leveling problem is proposed, using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve this inverse problem, taking into account both the time-dependent efficiencies and the generation/storage availability of each energy storage technology. In this analysis, a direct numerical method is employed to find the least-cost dispatch of the available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties, which are associated with the confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
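
A minimal sketch of the kind of direct numerical scheme referred to above: midpoint (collocation-type) quadrature applied to a generic first-kind Volterra equation. The kernel and right-hand side are toy placeholders, not the paper's storage-dispatch model, and the piecewise-continuous kernel structure is omitted.

    import numpy as np

    def solve_volterra_first_kind(kernel, f, T, n):
        """Midpoint quadrature for the first-kind Volterra equation
        int_0^t K(t, s) x(s) ds = f(t), 0 < t <= T (second-order accurate)."""
        h = T / n
        t = h * np.arange(1, n + 1)        # collocation points t_i
        s_mid = t - h / 2.0                # quadrature midpoints s_{i-1/2}
        x = np.zeros(n)
        for i in range(n):
            # contribution of already-computed midpoint values
            acc = h * np.sum(kernel(t[i], s_mid[:i]) * x[:i])
            x[i] = (f(t[i]) - acc) / (h * kernel(t[i], s_mid[i]))
        return s_mid, x

    # toy check with K = 1 and f(t) = t, whose exact solution is x(s) = 1
    s, x = solve_volterra_first_kind(lambda t, s: np.ones_like(s, dtype=float),
                                     lambda t: t, T=1.0, n=100)
    print(np.max(np.abs(x - 1.0)))         # ~ machine precision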

Relevance: 100.00%

Abstract:


Objective There is limited evidence regarding the quality of prescribing for children in primary care. Several prescribing criteria (indicators) have been developed to assess the appropriateness of prescribing in older and middle-aged adults but few are relevant to children. The objective of this study was to develop a set of prescribing indicators that can be applied to prescribing or dispensing data sets to determine the prevalence of potentially inappropriate prescribing in children (PIPc) in primary care settings.


Design Two-round modified Delphi consensus method.


Setting Irish and UK general practice.


Participants A project steering group consisting of academic and clinical general practitioners (GPs) and pharmacists was formed to develop a list of indicators from a literature review and clinical expertise. Fifteen experts, consisting of GPs, pharmacists and paediatricians from the Republic of Ireland and the UK, formed the Delphi panel.


Results 47 indicators were reviewed by the project steering group and 16 were presented to the Delphi panel. In the first round of this exercise, consensus was achieved on nine of these indicators. Of the remaining seven indicators, two were removed following review of expert panel comments and discussion of the project steering group. The second round of the Delphi process focused on the remaining five indicators, which were amended based on first round feedback. Three indicators were accepted following the second round of the Delphi process and the remaining two indicators were removed. The final list consisted of 12 indicators categorised by respiratory system (n=6), gastrointestinal system (n=2), neurological system (n=2) and dermatological system (n=2).


Conclusions The PIPc indicators are a set of prescribing criteria developed for use in children in primary care in the absence of clinical information. The utility of these criteria will be tested in further studies using prescribing databases.

Relevance: 100.00%

Abstract:

Solving the microkinetics of catalytic systems, which bridges microscopic processes and macroscopic reaction rates, is currently vital for understanding catalysis in silico. However, traditional microkinetic solvers possess several drawbacks that make the process slow and unreliable for complicated catalytic systems. In this paper, a new approach, the so-called reversibility iteration method (RIM), is developed to solve the microkinetics of catalytic systems. Using the chemical potential notation we previously proposed to simplify the kinetic framework, catalytic systems can be shown analytically to be logically equivalent to an electric circuit, and the reaction rate and coverage can be calculated by iteratively updating the reversibilities. Compared to the traditional modified Newton iteration method (NIM), our method is not sensitive to the initial guess of the solution and typically requires fewer iteration steps. Moreover, the method does not require arbitrary-precision arithmetic and has a higher probability of successfully solving the system. These features make it ∼1000 times faster than the modified Newton iteration method for the systems we tested. In addition, the derived concept and the mathematical framework presented in this work may provide new insight into catalytic reaction networks.
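
For context, the sketch below finds the steady-state coverages of a toy two-adsorbate catalytic cycle with a conventional numerical root-finding approach, i.e. the kind of baseline solver the RIM is compared against; the rate constants and pressures are made-up, and this is not the reversibility-iteration scheme itself.

    import numpy as np
    from scipy.optimize import fsolve

    # Illustrative (hypothetical) rate constants and pressures
    k1f, k1b = 1e3, 1e1     # A adsorption/desorption
    k2f, k2b = 1e3, 1e2     # B adsorption/desorption
    k3 = 1e-2               # surface reaction A* + B* -> C + 2*
    pA, pB = 1.0, 1.0

    def steady_state(theta):
        tA, tB = theta
        t_free = 1.0 - tA - tB
        r3 = k3 * tA * tB
        return [k1f * pA * t_free - k1b * tA - r3,   # d(theta_A)/dt = 0
                k2f * pB * t_free - k2b * tB - r3]   # d(theta_B)/dt = 0

    tA, tB = fsolve(steady_state, x0=[0.3, 0.3])
    rate = k3 * tA * tB     # turnover frequency of the surface reaction step
    print(f"theta_A={tA:.4f}, theta_B={tB:.4f}, rate={rate:.3e}")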

Relevance: 100.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

In this dissertation, different analytical strategies are developed to discover and characterize mammalian brain peptides using small amounts of tissue. The magnocellular neurons of the rat supraoptic nucleus (SON), in tissue and in cell culture, served as the main model to study neuropeptides, in addition to hippocampal neurons and mouse embryonic pituitaries. The neuropeptidomics studies described here use different extraction methods on tissue or cell culture combined with mass spectrometry (MS) techniques, matrix-assisted laser desorption/ionization (MALDI) and electrospray ionization (ESI). These strategies led to the identification of multiple peptides from the rat/mouse brain in tissue and cell cultures, including novel compounds. One of the goals of this dissertation was to optimize sample preparation for mass spectrometric analysis of samples isolated from well-defined brain regions. Here, the neuropeptidomics study of the SON resulted in the identification of 85 peptides, including 20 unique peptides from known prohormones. This study includes mass spectrometric analysis even of individually isolated magnocellular neuroendocrine cells, in which vasopressin and several other peptides are detected. At the same time, it was shown that the same approach could be applied to analyze peptides isolated from a similar hypothalamic region, the suprachiasmatic nucleus (SCN). Although there was some overlap in the peptides detected in the two brain nuclei, peptides specific to each nucleus were also detected. Among other peptides, provasopressin fragments were specifically detected in the SON, while angiotensin I, somatostatin-14, neurokinin B, galanin, and vasoactive intestinal peptide (VIP) were detected only in the SCN. Lists of peptides were generated from both brain regions to compare the peptidomes of the SON and SCN. Moving from the analysis of magnocellular neurons in tissue to cell culture, the direct peptidomics of magnocellular and hippocampal neurons led to the detection of 10 peaks that were assigned to previously characterized peptides and 17 peaks that remain unassigned. Peptides from the vasopressin prohormone and secretogranin-2 are attributed to magnocellular neurons, whereas neurokinin A, peptide J, and neurokinin B are attributed to cultured hippocampal neurons. This approach enabled the elucidation of cell-specific prohormone processing and the discovery of cell-cell signaling peptides. Peptides with roles in the development of the pituitary were analyzed using transgenic mice. The Hes1 knockout (KO) is a genetically modified mouse that survives only to embryonic day 18.5 (e18.5). Anterior pituitaries of Hes1 null mice exhibit hypoplasia due to increased cell death and reduced proliferation, and in the intermediate lobe the cells differentiate abnormally into somatotropes instead of melanotropes. These previous findings demonstrate that Hes1 has multiple roles in pituitary development, cell differentiation, and cell fate. Vasopressin (AVP) was detected in all samples. Interestingly, somatostatin [92-100] and provasopressin [151-168] were detected in the mutant but not in the wild-type or heterozygous pituitaries, while somatostatin-14 was detected only in the heterozygous pituitary. In addition, the putative peptide corresponding to m/z 1330.2 and POMC [205-222] were detected in the mutant and heterozygous pituitaries, but not in the wild type.
These results indicate that Hes1 influences the processing of different prohormones, which may have roles during development, and open new directions for further developmental studies. This research demonstrates the robust capabilities of MS, which enables the unbiased direct analysis of peptides extracted from complex biological systems and allows important questions about cell-cell signaling in the brain to be addressed.

Relevance: 100.00%

Abstract:

Chronic kidney disease (CKD) and atrial fibrillation (AF) frequently coexist. However, the extent to which CKD increases the risk of thromboembolism in patients with nonvalvular AF, and the benefits of anticoagulation in this group, remain unclear. We addressed the role of CKD in the prediction of thromboembolic events and the impact of anticoagulation using a meta-analysis approach. Data sources included MEDLINE, EMBASE, and Cochrane (from inception to January 2014). Three independent reviewers selected studies. Descriptive and quantitative information was extracted from each selected study, and a random-effects meta-analysis was performed. After screening 962 search results, 19 studies were considered eligible. Among patients with AF, the presence of CKD resulted in an increased risk of thromboembolism (hazard ratio [HR] 1.46, 95% confidence interval [CI] 1.20 to 1.76, p = 0.0001), particularly in the case of end-stage CKD (HR 1.83, 95% CI 1.56 to 2.14, p <0.00001). Warfarin decreased the incidence of thromboembolic events in patients with non-end-stage CKD (HR 0.39, 95% CI 0.18 to 0.86, p <0.00001). Recent data on novel oral anticoagulants suggested a higher efficacy of these agents compared with warfarin (HR 0.80, 95% CI 0.66 to 0.96, p = 0.02) and aspirin (HR 0.32, 95% CI 0.19 to 0.55, p <0.0001) in patients with non-end-stage CKD. In conclusion, the presence of CKD in patients with AF is associated with an almost 50% increased thromboembolic risk, which can be effectively decreased with appropriate antithrombotic therapy. Further prospective studies are needed to better evaluate the benefit of anticoagulation in patients with severe CKD.
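
A common choice for the random-effects pooling mentioned above is the DerSimonian-Laird estimator; the sketch below pools log hazard ratios whose standard errors are recovered from 95% confidence intervals. The three input studies are hypothetical illustrations, not the 19 studies analysed here.

    import numpy as np

    def random_effects_pool(hr, lo, hi):
        """DerSimonian-Laird random-effects pooling of hazard ratios
        given per-study point estimates and 95% CIs (illustrative sketch)."""
        y = np.log(hr)                               # log hazard ratios
        se = (np.log(hi) - np.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        v = se ** 2
        w = 1.0 / v                                  # fixed-effect weights
        y_fe = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
        k = len(y)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
        w_re = 1.0 / (v + tau2)                      # random-effects weights
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se_re = 1.0 / np.sqrt(np.sum(w_re))
        ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
        return np.exp(y_re), ci, tau2

    # hypothetical per-study hazard ratios (not the values from this meta-analysis)
    hr, ci, _ = random_effects_pool(hr=np.array([1.3, 1.6, 1.4]),
                                    lo=np.array([1.1, 1.2, 1.0]),
                                    hi=np.array([1.6, 2.1, 2.0]))
    print(f"pooled HR = {hr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")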

Relevance: 100.00%

Abstract:

Cranial cruciate ligament (CCL) deficiency is the leading cause of lameness affecting the stifle joints of large-breed dogs, especially Labrador Retrievers. Although CCL disease has been studied extensively, its exact pathogenesis and the primary cause leading to CCL rupture remain controversial. However, weakening secondary to repetitive microtrauma is currently believed to cause the majority of CCL instabilities diagnosed in dogs. Gait analysis techniques have become the most productive tools to investigate normal and pathological gait in human and veterinary subjects. The inverse dynamics analysis approach models the limb as a series of connected linkages and integrates morphometric data to yield information about the net joint moment, patterns of muscle power, and joint reaction forces. The results of these studies have greatly advanced our understanding of the pathogenesis of joint diseases in humans. A muscular imbalance between the hamstring and quadriceps muscles has been suggested as a cause of anterior cruciate ligament rupture in female athletes. Based on these findings, neuromuscular training programs leading to a relative risk reduction of up to 80% have been designed. In spite of the cost and morbidity associated with CCL disease and its management, very few studies have focused on inverse dynamics gait analysis of this condition in dogs. The general goals of this research were (1) to further define gait mechanics in Labrador Retrievers with and without CCL deficiency, (2) to identify individual dogs that are susceptible to CCL disease, and (3) to characterize their gait. The mass, location of the center of mass (COM), and mass moment of inertia of hind limb segments were calculated using a noninvasive method based on computed tomography of normal and CCL-deficient Labrador Retrievers. Regression models were developed to determine predictive equations that estimate body segment parameters on the basis of simple morphometric measurements, providing a basis for nonterminal studies of hind limb inverse dynamics in Labrador Retrievers. Kinematic, ground reaction force (GRF), and morphometric data were combined in an inverse dynamics approach to compute hock, stifle, and hip net moments, powers, and joint reaction forces (JRF) during trotting in normal, CCL-deficient, or sound contralateral limbs. Reductions in joint moment, power, and loads observed in CCL-deficient limbs were interpreted as modifications adopted to reduce or avoid painful mobilization of the injured stifle joint. Lameness resulting from CCL disease predominantly affected reaction forces during the braking phase and extension during push-off. Kinetics also identified greater joint moments and power in the contralateral limbs compared with normal limbs, particularly of the stifle extensor muscle group, which may correlate with the lameness observed, but also with the predisposition of contralateral limbs to CCL deficiency in dogs. For the first time, surface EMG patterns of major hind limb muscles during trotting gait of healthy Labrador Retrievers were characterized and compared with kinetic and kinematic data of the stifle joint. The use of surface EMG highlighted the co-contraction patterns of the muscles around the stifle joint, which were documented during transition periods between flexion and extension of the joint, but also during the flexion observed in the weight-bearing phase.
Identification of possible differences in EMG activation characteristics between healthy dogs and dogs with, or predisposed to, orthopedic and neurological disease may help in understanding the neuromuscular abnormalities and gait mechanics of such disorders in the future. Conformation parameters of hind limbs predisposed to CCL deficiency, obtained from femoral and tibial radiographs, hind limb CT images, and dual-energy X-ray absorptiometry, were compared with the conformation parameters of hind limbs at low risk. Using a receiver operating characteristic (ROC) curve analysis, a combination of the tibial plateau angle and the femoral anteversion angle measured on radiographs was determined to be optimal for discriminating between limbs predisposed and not predisposed to CCL disease in Labrador Retrievers. In the future, the tibial plateau angle (TPA) and femoral anteversion angle (FAA) may be used to screen dogs suspected of being susceptible to CCL disease. Lastly, kinematics and kinetics across the hock, stifle, and hip joints in Labrador Retrievers presumed to be at low risk based on their radiographic TPA and FAA were compared with gait data from dogs presumed to be predisposed to CCL disease, for both overground and treadmill trotting. For overground trials, the extensor moment at the hock and the energy generated around the hock and stifle joints were increased in predisposed limbs compared with non-predisposed limbs. For treadmill trials, dogs classified as predisposed to CCL disease held their stifle at a greater degree of flexion, extended their hock less, and generated more energy around the stifle joints while trotting compared with dogs at low risk. This characterization of the gait mechanics of Labrador Retrievers at low risk of, or predisposed to, CCL disease may help in developing and monitoring preventive exercise programs designed to decrease gastrocnemius dominance and strengthen the hamstring muscle group.
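
A minimal sketch of the ROC-based discrimination step described above: the two radiographic angles are combined in a logistic model and a cut-off is chosen by the Youden index. The TPA/FAA values are simulated placeholders, not the study's measurements.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)

    # hypothetical radiographic measurements: tibial plateau angle (TPA) and
    # femoral anteversion angle (FAA), in degrees; 1 = limb predisposed to CCL disease
    n = 60
    predisposed = rng.integers(0, 2, size=n)
    tpa = 24 + 4 * predisposed + rng.normal(0, 3, n)
    faa = 27 + 5 * predisposed + rng.normal(0, 4, n)
    X = np.column_stack([tpa, faa])

    # combine the two angles into a single discriminant score
    clf = LogisticRegression().fit(X, predisposed)
    score = clf.predict_proba(X)[:, 1]

    fpr, tpr, thresholds = roc_curve(predisposed, score)
    auc = roc_auc_score(predisposed, score)
    best = np.argmax(tpr - fpr)     # Youden index picks the best cut-off
    print(f"AUC = {auc:.2f}; sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f}")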

Relevance: 100.00%

Abstract:

Analysis methods for electrochemical etching baths consisting of various concentrations of hydrofluoric acid (HF) and an additional organic surface-wetting agent are presented. These electrolytes are used for the formation of meso- and macroporous silicon. Monitoring the etching bath composition requires at least one method each for the determination of the HF concentration and of the organic content of the bath. However, it is a precondition that the analysis equipment withstands the aggressive HF. Titration and a fluoride ion-selective electrode are used for the determination of the HF concentration, and a cuvette test method for the analysis of the organic content. The most suitable analysis method is identified depending on the components in the electrolyte, with a focus on resistance to the aggressive HF.
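
As an illustration of the fluoride ion-selective electrode approach, the sketch below fits a Nernstian calibration line to standards and inverts it for an unknown bath sample. The potentials and concentrations are made-up values; activity corrections and the cuvette test for the organic content are not covered.

    import numpy as np

    # Hypothetical fluoride ISE calibration: potentials (mV) measured in HF
    # standards of known concentration (mol/L); a Nernstian response is linear
    # in log10(concentration).
    c_std = np.array([0.01, 0.05, 0.1, 0.5, 1.0])
    e_std = np.array([152.0, 110.8, 93.1, 51.6, 33.9])   # made-up readings

    slope, intercept = np.polyfit(np.log10(c_std), e_std, 1)
    print(f"electrode slope = {slope:.1f} mV/decade (ideal ~ -59 mV at 25 C)")

    # invert the calibration for an unknown etching-bath sample
    e_sample = 72.5                                      # measured potential, mV
    c_sample = 10 ** ((e_sample - intercept) / slope)
    print(f"estimated free-fluoride (HF) concentration = {c_sample:.3f} mol/L")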

Relevance: 100.00%

Abstract:

We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years, from 2009 to 2011. The interstellar neutral (ISN) O&Ne gas flow was found in the first-year heavy neutral map at 601 eV, and its flow direction and temperature were studied. However, due to the low counting statistics, the full sky maps have not previously been treated in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to obtain a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to separate areas where the heavy neutral signal is statistically significant from the background, and to detect heavy neutral atom structures consistently. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O&Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.
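
A simplified illustration of the first two steps, a signal-to-noise filter followed by a crude neighbour-based clustering, applied to a synthetic counts map; this is a sketch under assumed Poisson statistics, not the IBEX-Lo processing pipeline.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical counts map: 30 x 60 pixels in (latitude, longitude), Poisson
    # background of ~4 counts/pixel plus a localized "extended tail" feature.
    nlat, nlon = 30, 60
    counts = rng.poisson(4.0, size=(nlat, nlon)).astype(float)
    counts[18:24, 10:20] += rng.poisson(6.0, size=(6, 10))   # injected signal

    # (1) signal-to-noise filter: estimate the background from the map median and
    # keep pixels whose Poisson significance exceeds a chosen threshold
    bkg = np.median(counts)
    snr = (counts - bkg) / np.sqrt(bkg)
    significant = snr > 3.0

    # (2) crude clustering step: keep significant pixels only if at least two of
    # their four neighbours are also significant (suppresses isolated noise hits)
    neigh = (np.roll(significant, 1, 0).astype(int) + np.roll(significant, -1, 0)
             + np.roll(significant, 1, 1) + np.roll(significant, -1, 1))
    structure = significant & (neigh >= 2)
    print(f"{structure.sum()} pixels retained as a coherent emission feature")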

Relevance: 100.00%

Abstract:

This study mainly aims to provide an inter-industry analysis through the subdivision of various industries in flow of funds (FOF) accounts. Combined with Financial Statement Analysis data from 2004 and 2005, the Korean FOF accounts are reconstructed to form "from-whom-to-whom" FOF tables, which are composed of 115 institutional sectors and correspond to the tables and techniques of input–output (I–O) analysis. First, power of dispersion indices are obtained by applying the I–O analysis method. Most service and IT industries, construction, and light industries in manufacturing fall into the first-quadrant group, whereas heavy and chemical industries are placed in the fourth quadrant, since their power indices in the asset-oriented system are comparatively smaller than those of other institutional sectors. Second, investments and savings induced by the central bank are calculated for monetary policy evaluation. Industries are split into two groups to compare their features: the first group comprises industries whose power of dispersion in the asset-oriented system is greater than 1, and the second group those whose index is less than 1. We find that the ratios of net induced investments (NII) to total liabilities of the first group are about half those of the second group, since the former's induced savings are clearly greater than the latter's.
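
The power of dispersion index follows from the column sums of the Leontief-type inverse; the sketch below applies this standard I-O formula to a toy 4-sector coefficient matrix standing in for the 115-sector from-whom-to-whom FOF table.

    import numpy as np

    # Hypothetical 4-sector coefficient matrix A (column sums < 1); in the paper
    # this role is played by the asset-oriented 115-sector FOF table.
    A = np.array([[0.10, 0.05, 0.20, 0.02],
                  [0.15, 0.10, 0.05, 0.10],
                  [0.05, 0.20, 0.10, 0.05],
                  [0.10, 0.05, 0.05, 0.15]])

    n = A.shape[0]
    L = np.linalg.inv(np.eye(n) - A)          # Leontief-type inverse (I - A)^-1

    col_sums = L.sum(axis=0)                  # backward linkage of each sector
    power_of_dispersion = col_sums / col_sums.mean()
    for j, p in enumerate(power_of_dispersion):
        tag = ">1 (above-average pull)" if p > 1 else "<1 (below-average pull)"
        print(f"sector {j}: power of dispersion = {p:.3f} {tag}")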

Relevance: 100.00%

Abstract:

Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there has been an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. To meet these demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and to incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. Then, the TMCA algorithm is proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is also proposed to transform unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
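
The class-imbalance step can be illustrated with a simple undersampling ensemble: each base learner sees all rare-event samples plus an equal-sized random draw of the majority class, and scores are averaged. The data and classifier below are placeholders, not the framework's actual features or its IF-TMCA pipeline.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    # synthetic imbalanced data standing in for rare-event (e.g. goal-event) features
    X, y = make_classification(n_samples=4000, n_features=20,
                               weights=[0.97, 0.03], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    rng = np.random.default_rng(0)
    pos = np.where(y_tr == 1)[0]
    neg = np.where(y_tr == 0)[0]

    # each base learner: all rare-event samples + equal-sized majority subsample
    models = []
    for _ in range(25):
        idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
        models.append(DecisionTreeClassifier(max_depth=6).fit(X_tr[idx], y_tr[idx]))

    proba = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
    print("F1 on the rare class:", round(f1_score(y_te, (proba > 0.5).astype(int)), 3))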

Relevance: 100.00%

Abstract:

Maintenance of transport infrastructure assets is widely advocated as key to minimizing current and future costs of the transportation network. While effective maintenance decisions are often a result of engineering skills and practical knowledge, efficient decisions must also account for the net result over an asset's life cycle. One essential aspect of the long-term perspective on transport infrastructure maintenance is to proactively estimate maintenance needs. For immediate maintenance actions, support tools that can prioritize potential maintenance candidates are important for obtaining an efficient maintenance strategy. This dissertation consists of five individual research papers presenting a microdata analysis approach to transport infrastructure maintenance. Microdata analysis is a multidisciplinary field in which large quantities of data are collected, analyzed, and interpreted to improve decision-making. Increased access to transport infrastructure data enables a deeper understanding of causal effects and the possibility of making predictions of future outcomes. The microdata analysis approach covers the complete process from data collection to actual decisions and is therefore well suited to the task of improving efficiency in transport infrastructure maintenance. Statistical modeling was the analysis method selected in this dissertation and provided solutions to the different problems presented in each of the five papers. In Paper I, a time-to-event model was used to estimate remaining road pavement lifetimes in Sweden. In Paper II, an extension of the model in Paper I assessed the impact of latent variables on road lifetimes, identifying the sections in a road network that are weaker due to, for example, subsoil conditions or undetected heavy traffic. The study in Paper III incorporated a probabilistic parametric distribution as a representation of road lifetimes into an equation for the marginal cost of road wear. Differentiated road wear marginal costs for heavy and light vehicles are an important information basis for decisions regarding vehicle miles traveled (VMT) taxation policies. In Paper IV, a distribution-based clustering method was used to distinguish between road segments that are deteriorating and road segments that have a stationary road condition. Within railway networks, temporary speed restrictions are often imposed because of maintenance and must be addressed in order to maintain punctuality. The study in Paper V evaluated the empirical effect of speed restrictions on running time on a Norwegian railway line using a generalized linear mixed model.
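
A minimal sketch of the time-to-event idea from Paper I: fitting a Weibull survival model to right-censored lifetime data, here using the lifelines package. The pavement lifetimes below are simulated, and the actual models in the dissertation include covariates and latent effects not shown here.

    import numpy as np
    from lifelines import WeibullFitter

    rng = np.random.default_rng(2)

    # hypothetical pavement-section data: true lifetimes in years, right-censored
    # for sections that had not yet reached maintenance need when last inspected
    true_life = rng.weibull(2.2, size=500) * 18.0        # shape ~2.2, scale ~18 yr
    observed_at = rng.uniform(5, 25, size=500)           # age at last inspection
    duration = np.minimum(true_life, observed_at)
    event = (true_life <= observed_at).astype(int)       # 1 = failure observed

    wf = WeibullFitter().fit(duration, event_observed=event)
    print(f"Weibull shape rho = {wf.rho_:.2f}, scale lambda = {wf.lambda_:.1f} years")
    print("estimated median pavement lifetime:",
          round(wf.median_survival_time_, 1), "years")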

Relevance: 100.00%

Abstract:

The aim of this study was to evaluate the relationship between diffuse cortical atrophy, identified by brain neuroimaging, and cognitive performance, determined through the application of neuropsychological tests assessing working memory, verbal symbolic reasoning, and anterograde declarative memory. A total of 114 subjects, recruited at the Hospital Universitario Mayor Méderi in Bogotá through convenience sampling, participated. The results showed significant differences between the two groups (patients diagnosed with diffuse cortical atrophy and patients whose neuroimaging was interpreted as within normal limits) on all the neuropsychological tests applied. Regarding the demographic variables, educational level was observed to act as a neuroprotective factor against possible cognitive decline. These findings are important for establishing early detection protocols for the possible onset of primary neurodegenerative diseases.

Relevance: 100.00%

Abstract:

This paper presents the development of a combined experimental and numerical approach to studying the anaerobic digestion of both the wastes produced in a biorefinery using yeast for biodiesel production and the wastes generated in the preceding microbial biomass production. The experimental results show that it is possible to valorise all the tested residues through anaerobic digestion. In the implementation of the numerical model for anaerobic digestion, a procedure for the identification of its parameters needed to be developed. A hybrid search was used: a genetic algorithm followed by a direct search method. In order to test the parameter estimation procedure, noise-free data were considered first and a critical analysis of the results obtained so far was undertaken. As a demonstration of its application, the procedure was then applied to experimental data.
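
The two-stage parameter-identification idea can be sketched as a global evolutionary search followed by a derivative-free direct search; here scipy's differential evolution stands in for the genetic algorithm, and the one-equation decay model and data are toy placeholders for the anaerobic digestion model.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    # Toy stand-in for the AD model: first-order substrate decay with two unknown
    # parameters (rate constant k, biodegradable fraction f0); the real model and
    # data are of course far richer than this sketch.
    t_obs = np.array([0, 2, 4, 8, 12, 20, 30], dtype=float)      # days
    s_obs = np.array([1.0, 0.74, 0.58, 0.38, 0.27, 0.15, 0.09])  # normalised COD

    def model(params, t):
        k, f0 = params
        return (1 - f0) + f0 * np.exp(-k * t)

    def sse(params):
        return np.sum((model(params, t_obs) - s_obs) ** 2)

    bounds = [(0.01, 1.0), (0.1, 1.0)]

    # stage 1: population-based global search (evolutionary stand-in for the GA)
    coarse = differential_evolution(sse, bounds, seed=0, tol=1e-3)

    # stage 2: derivative-free direct search refines the best candidate
    refined = minimize(sse, coarse.x, method="Nelder-Mead")
    print("estimated k, f0:", np.round(refined.x, 3), " SSE:", round(refined.fun, 5))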