852 results for Weighted histogram analysis method


Relevance:

100.00%

Publisher:

Abstract:

Chronic kidney disease (CKD) and atrial fibrillation (AF) frequently coexist. However, the extent to which CKD increases the risk of thromboembolism in patients with nonvalvular AF and the benefits of anticoagulation in this group remain unclear. We addressed the role of CKD in the prediction of thromboembolic events and the impact of anticoagulation using a meta-analysis method. Data sources included MEDLINE, EMBASE, and Cochrane (from inception to January 2014). Three independent reviewers selected studies. Descriptive and quantitative information was extracted from each selected study and a random-effects meta-analysis was performed. After screening 962 search results, 19 studies were considered eligible. Among patients with AF, the presence of CKD resulted in an increased risk of thromboembolism (hazard ratio [HR] 1.46, 95% confidence interval [CI] 1.20 to 1.76, p = 0.0001), particularly in cases of end-stage CKD (HR 1.83, 95% CI 1.56 to 2.14, p <0.00001). Warfarin decreased the incidence of thromboembolic events in patients with non-end-stage CKD (HR 0.39, 95% CI 0.18 to 0.86, p <0.00001). Recent data on novel oral anticoagulants suggested a higher efficacy of these agents compared with warfarin (HR 0.80, 95% CI 0.66 to 0.96, p = 0.02) and aspirin (HR 0.32, 95% CI 0.19 to 0.55, p <0.0001) in patients with non-end-stage CKD. In conclusion, the presence of CKD in patients with AF is associated with an almost 50% increased thromboembolic risk, which can be effectively decreased with appropriate antithrombotic therapy. Further prospective studies are needed to better evaluate the benefit of anticoagulation in patients with severe CKD.
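
The abstract does not specify the pooling estimator; a minimal DerSimonian-Laird random-effects sketch in Python (with made-up hazard ratios, not the study's data) illustrates how per-study HRs and confidence intervals combine into a pooled HR:

```python
import numpy as np

def random_effects_pool(hr, lci, uci):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    hr, lci, uci: per-study hazard ratios with 95% CI bounds.
    Returns the pooled HR and its 95% CI.
    """
    y = np.log(hr)                            # log hazard ratios
    se = (np.log(uci) - np.log(lci)) / (2 * 1.96)
    v = se ** 2                               # within-study variances

    w = 1.0 / v                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance

    w_re = 1.0 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci

# Illustrative (made-up) per-study hazard ratios, not the paper's data.
hr, low, high = [1.3, 1.6, 1.5], [1.1, 1.2, 1.0], [1.6, 2.1, 2.2]
print(random_effects_pool(np.array(hr), np.array(low), np.array(high)))
```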

Relevance:

100.00%

Publisher:

Abstract:

Cranial cruciate ligament (CCL) deficiency is the leading cause of lameness affecting the stifle joints of large breed dogs, especially Labrador Retrievers. Although CCL disease has been studied extensively, its exact pathogenesis and the primary cause leading to CCL rupture remain controversial. However, weakening secondary to repetitive microtrauma is currently believed to cause the majority of CCL instabilities diagnosed in dogs. Techniques of gait analysis have become the most productive tools to investigate normal and pathological gait in human and veterinary subjects. The inverse dynamics analysis approach models the limb as a series of connected linkages and integrates morphometric data to yield information about the net joint moment, patterns of muscle power, and joint reaction forces. The results of these studies have greatly advanced our understanding of the pathogenesis of joint diseases in humans. A muscular imbalance between the hamstring and quadriceps muscles has been suggested as a cause of anterior cruciate ligament rupture in female athletes. Based on these findings, neuromuscular training programs leading to a relative risk reduction of up to 80% have been designed. In spite of the cost and morbidity associated with CCL disease and its management, very few studies have focused on the inverse dynamics gait analysis of this condition in dogs. The general goals of this research were (1) to further define the gait mechanics of Labrador Retrievers with and without CCL deficiency, (2) to identify individual dogs that are susceptible to CCL disease, and (3) to characterize their gait. The mass, location of the center of mass (COM), and mass moment of inertia of hind limb segments were calculated using a noninvasive method based on computerized tomography of normal and CCL-deficient Labrador Retrievers. Regression models were developed to determine predictive equations to estimate body segment parameters on the basis of simple morphometric measurements, providing a basis for nonterminal studies of inverse dynamics of the hind limbs in Labrador Retrievers. Kinematic, ground reaction force (GRF), and morphometric data were combined in an inverse dynamics approach to compute hock, stifle, and hip net moments, powers, and joint reaction forces (JRF) during trotting in normal, CCL-deficient, or sound contralateral limbs. Reductions in joint moment, power, and loads observed in CCL-deficient limbs were interpreted as modifications adopted to reduce or avoid painful mobilization of the injured stifle joint. Lameness resulting from CCL disease predominantly affected reaction forces during the braking phase and extension during push-off. Kinetics also identified a greater joint moment and power in the contralateral limbs compared with normal limbs, particularly of the stifle extensor muscle group, which may correlate with the lameness observed, but also with the predisposition of contralateral limbs to CCL deficiency in dogs. For the first time, surface EMG patterns of major hind limb muscles during the trotting gait of healthy Labrador Retrievers were characterized and compared with kinetic and kinematic data of the stifle joint. The use of surface EMG highlighted the co-contraction patterns of the muscles around the stifle joint, which were documented during transition periods between flexion and extension of the joint, but also during the flexion observed in the weight-bearing phase.
Identification of possible differences in EMG activation characteristics between healthy dogs and dogs with, or predisposed to, orthopedic and neurological disease may help in understanding the neuromuscular abnormalities and gait mechanics of such disorders in the future. Conformation parameters of hind limbs predisposed to CCL deficiency, obtained from femoral and tibial radiographs, hind limb CT images, and dual-energy X-ray absorptiometry, were compared with the conformation parameters of hind limbs at low risk. A combination of the tibial plateau angle (TPA) and femoral anteversion angle (FAA) measured on radiographs was determined to be optimal for discriminating between limbs predisposed and not predisposed to CCL disease in Labrador Retrievers using a receiver operating characteristic curve analysis method. In the future, the TPA and FAA may be used to screen dogs suspected of being susceptible to CCL disease. Last, kinematics and kinetics across the hock, stifle, and hip joints in Labrador Retrievers presumed to be at low risk based on their radiographic TPA and FAA were compared to gait data from dogs presumed to be predisposed to CCL disease, for both overground and treadmill trotting. For overground trials, the extensor moment at the hock and the energy generated around the hock and stifle joints were increased in predisposed limbs compared to non-predisposed limbs. For treadmill trials, dogs classified as predisposed to CCL disease held their stifle at a greater degree of flexion, extended their hock less, and generated more energy around the stifle joints while trotting on a treadmill compared with dogs at low risk. This characterization of the gait mechanics of Labrador Retrievers at low risk of or predisposed to CCL disease may help in developing and monitoring preventive exercise programs to decrease gastrocnemius dominance and strengthen the hamstring muscle group.
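
As a concrete illustration of the inverse dynamics step described above, the sketch below computes the net force and moment at the proximal joint of the most distal limb segment from the Newton-Euler equations; all segment parameters and inputs are illustrative stand-ins, not values from the study:

```python
import numpy as np

G = np.array([0.0, -9.81])  # gravity, m/s^2

def cross2(a, b):
    """z-component of the 2-D cross product."""
    return a[0] * b[1] - a[1] * b[0]

def distal_segment_inverse_dynamics(m, inertia, a_com, alpha,
                                    r_com, r_joint, r_cop, f_grf):
    """Net force and moment at the proximal joint of the most distal
    segment (e.g., the paw), from the 2-D Newton-Euler equations.

    m, inertia : segment mass (kg) and moment of inertia about the COM
    a_com, alpha : linear COM acceleration and angular acceleration
    r_com, r_joint, r_cop : COM, joint-centre, and centre-of-pressure positions
    f_grf : ground reaction force vector
    """
    # Newton: the joint reaction force closes the force balance.
    f_joint = m * a_com - m * G - f_grf
    # Euler about the COM: the net joint moment closes the moment balance.
    m_joint = (inertia * alpha
               - cross2(r_joint - r_com, f_joint)
               - cross2(r_cop - r_com, f_grf))
    return f_joint, m_joint

# Illustrative numbers only (not measured data).
f, m_ = distal_segment_inverse_dynamics(
    m=0.2, inertia=4e-4,
    a_com=np.array([0.5, 1.0]), alpha=2.0,
    r_com=np.array([0.05, 0.03]), r_joint=np.array([0.02, 0.08]),
    r_cop=np.array([0.07, 0.0]), f_grf=np.array([5.0, 120.0]))
print(f, m_)
```

Proceeding segment by segment up the limb with each joint's reaction force as input to the next segment yields the hock, stifle, and hip moments in turn.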

Relevance:

100.00%

Publisher:

Abstract:

Analysis methods for electrochemical etching baths consisting of various concentrations of hydrofluoric acid (HF) and an additional organic surface wetting agent are presented. These electrolytes are used for the formation of meso- and macroporous silicon. Monitoring the etching bath composition requires at least one method each for the determination of the HF concentration and of the organic content of the bath. A precondition, however, is that the analysis equipment withstands the aggressive HF. Titration and a fluoride ion-selective electrode are used for the determination of the HF, and a cuvette test method for the analysis of the organic content. The most suitable analysis method is identified depending on the components in the electrolyte, with a focus on resistance to the aggressive HF.
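
For the fluoride ion-selective electrode, concentration is typically recovered from a Nernstian calibration; a brief sketch with hypothetical calibration standards and readings, assuming near-ideal electrode response (~ -59 mV/decade for fluoride at 25 °C):

```python
import numpy as np

def calibrate_ise(conc_std, e_std):
    """Least-squares Nernstian calibration: E = e0 + slope * log10(c)."""
    slope, e0 = np.polyfit(np.log10(conc_std), e_std, 1)
    return e0, slope

def fluoride_concentration(e_meas, e0, slope):
    """Invert the calibration to get fluoride concentration (mol/L)."""
    return 10 ** ((e_meas - e0) / slope)

# Hypothetical standards and reading, for illustration only.
e0, slope = calibrate_ise(np.array([1e-3, 1e-2, 1e-1]),
                          np.array([120.0, 61.0, 2.0]))   # mV
print(fluoride_concentration(35.0, e0, slope))
```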

Relevance:

100.00%

Publisher:

Abstract:

We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years, from 2009 to 2011. The interstellar neutral (ISN) O&Ne gas flow was found in the first-year heavy neutral map at 601 eV, and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to separate areas where the heavy neutral signal is statistically significant from the background, and to detect heavy neutral atom structures consistently. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O&Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.
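
A per-pixel signal-to-noise filter for a counts map, assuming Poisson statistics, might look like the following; this is a generic sketch, not necessarily the exact filter used in the study:

```python
import numpy as np

def snr_filter(counts, background, threshold=3.0):
    """Per-pixel signal-to-noise estimate for a counts sky map.

    Assumes Poisson statistics: the excess (counts - background)
    divided by an approximate standard deviation. Pixels below
    `threshold` sigma are masked out as background-dominated.
    """
    excess = counts - background
    sigma = np.sqrt(counts + background)     # crude error on the excess
    snr = np.divide(excess, sigma, out=np.zeros_like(excess),
                    where=sigma > 0)
    return np.where(snr >= threshold, snr, np.nan)

# Toy 4x4 "map": a few bright pixels over a flat background.
counts = np.array([[5., 6., 30., 4.], [5., 28., 35., 6.],
                   [4., 5., 7., 5.], [6., 4., 5., 5.]])
print(snr_filter(counts, background=np.full_like(counts, 5.0)))
```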

Relevance:

100.00%

Publisher:

Abstract:

In this work, we further extend the recently developed adaptive data analysis method, the Sparse Time-Frequency Representation (STFR) method. This method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms specifically for the non-periodic STFR, which extends the method to problems that former STFR methods could not handle, including stability under noise and non-periodic data analysis. This is a significant improvement, since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze the convergence of this new algorithm for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves that have overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further explain some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.
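
The sparsest-decomposition problem that STFR-type methods solve is commonly written as follows (notation assumed here, not quoted from the thesis):

```latex
\min_{M,\,\{a_k,\theta_k\}} M
\quad \text{subject to} \quad
f(t) = \sum_{k=1}^{M} a_k(t)\cos\theta_k(t),
\qquad a_k\cos\theta_k \in \mathcal{D},
```

where \(\mathcal{D}\) is a dictionary of AM-FM atoms whose envelopes \(a_k(t)\) and instantaneous frequencies \(\theta_k'(t)\) vary slowly relative to the oscillation \(\cos\theta_k(t)\).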

Relevance:

100.00%

Publisher:

Abstract:

The paper catalogues the procedures and steps involved in agroclimatic classification. These vary from conventional descriptive methods to modern computer-based numerical techniques. There are three mutually independent numerical classification techniques, namely ordination, cluster analysis, and the minimum spanning tree, and under each technique several forms of grouping procedures exist. The choice of numerical classification procedure differs with the type of data set. In the case of numerical continuous data sets with both positive and negative values, the simplest and least controversial procedures are the unweighted pair-group method (UPGMA) and the weighted pair-group method (WPGMA) under clustering techniques, with the similarity measure obtained either from the Gower metric or the standardized Euclidean metric. Where the number of attributes is large, these can be reduced to fewer new attributes defined by the principal components or coordinates obtained through an ordination technique. The first few components or coordinates explain the maximum variance in the data matrix. These derived attributes are less affected by noise in the data set. It is possible to check misclassifications using the minimum spanning tree.
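
In SciPy's hierarchical clustering, UPGMA and WPGMA correspond to the 'average' and 'weighted' linkage methods; a minimal sketch on a toy site-by-attribute matrix (illustrative numbers only):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy site-by-attribute matrix (rows: sites, columns: climate variables).
X = np.array([[22.1, 850., 0.61],
              [24.3, 430., 0.48],
              [21.8, 910., 0.63],
              [18.5, 1200., 0.71]])

# Standardized Euclidean distance, as suggested for continuous data
# with both positive and negative values.
d = pdist(X, metric='seuclidean')

# 'average' = UPGMA, 'weighted' = WPGMA in SciPy's naming.
upgma = linkage(d, method='average')
wpgma = linkage(d, method='weighted')

# Cut the UPGMA tree into two agroclimatic groups.
print(fcluster(upgma, t=2, criterion='maxclust'))
```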

Relevance:

100.00%

Publisher:

Abstract:

This study mainly aims to provide an inter-industry analysis through the subdivision of various industries in flow of funds (FOF) accounts. Combined with the Financial Statement Analysis data from 2004 and 2005, the Korean FOF accounts are reconstructed to form "from-whom-to-whom" basis FOF tables, which are composed of 115 institutional sectors and correspond to the tables and techniques of input–output (I–O) analysis. First, power of dispersion indices are obtained by applying the I–O analysis method. Most service and IT industries, construction, and the light industries in manufacturing fall into the first quadrant group, whereas the heavy and chemical industries are placed in the fourth quadrant, since their power indices in the asset-oriented system are comparatively smaller than those of other institutional sectors. Second, the investments and savings induced by the central bank are calculated for monetary policy evaluation. Industries are bifurcated into two groups to compare their features: the first group comprises industries whose power of dispersion in the asset-oriented system is greater than 1, whereas the second group comprises those whose index is less than 1. We found that the net induced investments (NII)–total liabilities ratios of the first group are roughly half those of the second group, since the former's induced savings are clearly greater than the latter's.
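
The power of dispersion index is computed from the column sums of the Leontief-type inverse; a minimal sketch of the standard I–O definition, applied here by analogy with a toy coefficient matrix (not the Korean FOF data):

```python
import numpy as np

def power_of_dispersion(A):
    """Backward-linkage (power of dispersion) indices from an
    input-output style coefficient matrix A: n times each column sum
    of the Leontief inverse, normalized by the grand total."""
    n = A.shape[0]
    B = np.linalg.inv(np.eye(n) - A)     # Leontief-type inverse
    col_sums = B.sum(axis=0)
    return n * col_sums / B.sum()        # > 1: above-average dispersion

# Toy 3-sector coefficient matrix, for illustration only.
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.25],
              [0.05, 0.15, 0.10]])
print(power_of_dispersion(A))
```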

Relevance:

100.00%

Publisher:

Abstract:

Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. In response to such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and to incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. Then, the TMCA algorithm is proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is therefore also proposed to transform the unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
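
The affinity propagation step can be sketched with scikit-learn; the feature vectors below are random stand-ins for multimedia items, and this shows only the exemplar-selection core of a summarization method, not the dissertation's full pipeline:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Stand-in feature vectors for multimedia items (e.g., video shots).
features = np.vstack([rng.normal(0, 1, (30, 16)),
                      rng.normal(4, 1, (30, 16))])

# Affinity propagation picks exemplars -- actual data points that
# summarize each cluster -- without fixing the number of clusters.
ap = AffinityPropagation(random_state=0).fit(features)
print("exemplar indices:", ap.cluster_centers_indices_)
print("cluster labels:", ap.labels_[:10])
```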

Relevance:

100.00%

Publisher:

Abstract:

Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative, and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate results by accommodating non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
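
A bare-bones geographically weighted regression can be written directly as kernel-weighted least squares fitted at each location; the data below are synthetic, and production work would instead use a dedicated GWR package with proper bandwidth selection:

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Minimal geographically weighted regression: at each location,
    fit weighted least squares with a Gaussian kernel over distance,
    yielding locally varying coefficients.

    X: (n, p) design matrix (include a column of ones for the intercept)
    y: (n,) response; coords: (n, 2) locations; bandwidth: kernel scale.
    """
    n = len(y)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

# Synthetic data with a spatially drifting slope, for illustration.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, (100, 2))
x = rng.normal(size=100)
y = (0.5 + 0.2 * coords[:, 0]) * x + rng.normal(scale=0.1, size=100)
X = np.column_stack([np.ones(100), x])
print(gwr_coefficients(X, y, coords, bandwidth=2.0)[:3])
```

The non-stationary coefficient behavior mentioned in the abstract shows up here as the slope estimate drifting with the first coordinate.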

Relevance:

100.00%

Publisher:

Abstract:

Maintenance of transport infrastructure assets is widely advocated as the key to minimizing current and future costs of the transportation network. While effective maintenance decisions are often a result of engineering skills and practical knowledge, efficient decisions must also account for the net result over an asset's life-cycle. One essential aspect of the long-term perspective on transport infrastructure maintenance is to proactively estimate maintenance needs. In dealing with immediate maintenance actions, support tools that can prioritize potential maintenance candidates are important for obtaining an efficient maintenance strategy. This dissertation consists of five individual research papers presenting a microdata analysis approach to transport infrastructure maintenance. Microdata analysis is a multidisciplinary field in which large quantities of data are collected, analyzed, and interpreted to improve decision-making. Increased access to transport infrastructure data enables a deeper understanding of causal effects and a possibility to make predictions of future outcomes. The microdata analysis approach covers the complete process from data collection to actual decisions and is therefore well suited to the task of improving efficiency in transport infrastructure maintenance. Statistical modeling was the selected analysis method in this dissertation and provided solutions to the different problems presented in each of the five papers. In Paper I, a time-to-event model was used to estimate remaining road pavement lifetimes in Sweden. In Paper II, an extension of the model in Paper I assessed the impact of latent variables on road lifetimes, revealing the sections of a road network that are weaker due to, e.g., subsoil conditions or undetected heavy traffic. The study in Paper III incorporated a probabilistic parametric distribution as a representation of road lifetimes into an equation for the marginal cost of road wear. Differentiated road wear marginal costs for heavy and light vehicles are an important information basis for decisions regarding vehicle miles traveled (VMT) taxation policies. In Paper IV, a distribution-based clustering method was used to distinguish between road segments that are deteriorating and road segments that have a stationary road condition. Within railway networks, temporary speed restrictions are often imposed because of maintenance and must be addressed in order to maintain punctuality. The study in Paper V evaluated the empirical effect of speed restrictions on running time on a Norwegian railway line using a generalized linear mixed model.
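
A time-to-event model of the kind used in Paper I can be sketched as a Weibull lifetime fit with right censoring; the paper's actual specification (covariates, latent variables) is richer than this minimal maximum-likelihood example on synthetic data:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, t, observed):
    """Negative log-likelihood of a Weibull lifetime model with right
    censoring: density for observed failures, survival for censored
    observations."""
    shape, scale = np.exp(params)        # keep parameters positive
    z = t / scale
    logpdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape
    logsurv = -z ** shape
    return -np.sum(observed * logpdf + (1 - observed) * logsurv)

# Synthetic pavement "lifetimes" in years; 0 marks a censored section.
rng = np.random.default_rng(2)
t = rng.weibull(2.0, 200) * 15.0
censor = rng.uniform(0, 25, 200)
observed = (t <= censor).astype(float)
t = np.minimum(t, censor)

res = minimize(weibull_negloglik, x0=np.zeros(2), args=(t, observed))
print("shape, scale:", np.exp(res.x))
```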

Relevance:

100.00%

Publisher:

Abstract:

Subtle structural differences can be observed in the islets of Langerhans region of microscopic images of pancreas cells of rats having normal glucose tolerance and rats having pre-diabetic (glucose intolerant) conditions. This paper proposes a way to automatically segment the islets of Langerhans region from the histological image of the rat's pancreas cells and, on the basis of some morphological features extracted from the segmented region, to classify the images as normal or pre-diabetic. The experiment is done on a set of 134 images, of which 56 are of the normal type and the remaining 78 are of the pre-diabetic type. The work has two stages: first, segmentation of the region of interest (ROI), i.e., the islets of Langerhans, from the pancreatic cell image and, second, extraction of the morphological features from the region of interest for classification. Wavelet analysis and the connected component analysis method have been used for automatic segmentation of the images. A few classifiers like OneRule, Naïve Bayes, MLP, J48 Tree, and SVM are used for evaluation, among which MLP performed the best.
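
The connected component stage can be sketched with scipy.ndimage; the binary input stands in for a wavelet-denoised, thresholded histology image, and the selection rule (keeping the largest regions) is an assumption for illustration:

```python
import numpy as np
from scipy import ndimage

def largest_components(binary_mask, keep=3):
    """Label connected components in a thresholded image and keep the
    `keep` largest regions as candidate islet areas."""
    labels, n = ndimage.label(binary_mask)
    if n == 0:
        return np.zeros_like(binary_mask)
    sizes = ndimage.sum(binary_mask, labels, index=range(1, n + 1))
    keep_ids = 1 + np.argsort(sizes)[::-1][:keep]
    return np.isin(labels, keep_ids)

# Toy binary image standing in for a thresholded histology slide.
img = np.zeros((20, 20), dtype=bool)
img[2:6, 2:6] = True        # one large blob
img[10:12, 14:17] = True    # one small blob
print(largest_components(img, keep=1).sum())
```

Morphological features (area, perimeter, eccentricity, and so on) can then be measured on the retained regions and fed to the classifiers.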

Relevance:

100.00%

Publisher:

Abstract:

The central thesis in the article is that the venture creation process is different for innovative versus imitative ventures. This holds up; the pace of the process differs by type of venture, as do, in line with theory-based hypotheses, the effects of certain human capital (HC) and social capital (SC) predictors. Importantly, and somewhat unexpectedly, the theoretically derived models using HC, SC, and certain controls are relatively successful in explaining progress in the creation process for the minority of innovative ventures, but achieve very limited success for the imitative majority. This may be due to a rationalistic bias in conventional theorizing and suggests that there is a need for considerable theoretical development regarding the important phenomenon of new venture creation processes. Another important result is that the building up of instrumental social capital, which we assess comprehensively and as a time-variant construct, is important for making progress with both types of ventures, and increasingly so as the process progresses. This result corroborates, with stronger operationalization and a more appropriate analysis method, what previously published research has only been able to hint at.

Relevance:

100.00%

Publisher:

Abstract:

Engineering assets are often complex systems. In a complex system, components often have failure interactions which lead to interactive failures. A system with interactive failures may have an increased failure probability. Hence, one may have to take interactive failures into account when designing and maintaining complex engineering systems. To address this issue, Sun et al. have developed an analytical model for interactive failures. In this model, the degree of interaction between two components is represented by interactive coefficients. To use this model for failure analysis, the related interactive coefficients must be estimated. However, methods for estimating the interactive coefficients have not been reported. To fill this gap, this paper presents five methods to estimate the interactive coefficients: a probabilistic method; a failure-data-based analysis method; a laboratory experimental method; a failure-interaction-mechanism-based method; and an expert estimation method. Examples are given to demonstrate the applications of the proposed methods. Comparisons among these methods are also presented.
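
Assuming the interactive-failure model takes the commonly cited fixed-point form, where each component's effective failure rate is its independent rate plus coefficient-weighted contributions from the other components, the effective rates follow from a linear solve; this is a sketch of that structure, not a verbatim reproduction of Sun et al.'s model:

```python
import numpy as np

def effective_failure_rates(independent_rates, theta):
    """Solve lambda = lambda_independent + Theta @ lambda, where
    theta[i, j] is the (assumed) interactive coefficient describing
    how much component j's failure rate inflates component i's."""
    n = len(independent_rates)
    return np.linalg.solve(np.eye(n) - theta, independent_rates)

# Two components with a one-way interaction (illustrative numbers).
lam_ind = np.array([0.01, 0.02])     # failures per hour, acting alone
theta = np.array([[0.0, 0.3],        # component 2 degrades component 1
                  [0.0, 0.0]])
print(effective_failure_rates(lam_ind, theta))
```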

Relevance:

100.00%

Publisher:

Abstract:

Information and communication technologies (particularly websites and e-mail) have the potential to deliver health behavior change programs to large numbers of adults at low cost. Controlled trials using these new media to promote physical activity have produced mixed results. User-centered development methods can assist in understanding the preferences of potential participants for website functions and content, and may lead to more effective programs. Eight focus group discussions were conducted with 40 adults after they had accessed a previously trialed physical activity website. The discussions were audio-taped, transcribed, and interpreted using a themed analysis method. Four key themes emerged: structure, interactivity, environmental context, and content. Preferences were expressed for websites that include simple interactive features, together with information on local community activity opportunities. Particular suggestions included online community notice boards, personalized progress charts, e-mail access to expert advice, and access to information on specific local physical activity facilities and services. Website physical activity interventions could usefully include personally relevant interactive and environmentally focused features and services identified through a user-centered development process.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a Critical Discourse Analysis (CDA) of four policy documents currently offering ‘sets of possibilities’ for the teaching of English as an additional or second language (hereafter EAL/ESL) in senior classrooms in Queensland, Australia. The aim is to identify the ways in which each document re-presents the notion of critical literacy. Leximancer software, and Fairclough’s textually-oriented discourse analysis method (2001, 2003) are used to interrogate the relevant sections of the documents for the ways in which they re-present (sic) and construct the discourses around critical language study. This paper presents the description, interpretation and explanation of the discourses in these documents which constitute part of a larger project in which teacher interviews and classroom teaching are also investigated for the ways in which ‘the critical’ is constructed and contested in knowledge and practice.