163 results for latent fingermarks


Relevance: 20.00%

Abstract:

Context Cancer patients experience a broad range of physical and psychological symptoms as a result of their disease and its treatment. On average, these patients report ten unrelieved and co-occurring symptoms. Objectives To determine if subgroups of oncology outpatients receiving active treatment (n=582) could be identified based on their distinct experience with thirteen commonly occurring symptoms; to determine whether these subgroups differed on select demographic and clinical characteristics; and to determine if these subgroups differed on quality of life (QOL) outcomes. Methods Demographic, clinical, and symptom data from one Australian and two U.S. studies were combined. Latent class analysis (LCA) was used to identify patient subgroups with distinct symptom experiences based on self-report data on symptom occurrence using the Memorial Symptom Assessment Scale (MSAS). Results Four distinct latent classes were identified (i.e., All Low (28.0%), Moderate Physical and Lower Psych (26.3%), Moderate Physical and Higher Psych (25.4%), All High (20.3%)). Age, gender, education, cancer diagnosis, and presence of metastatic disease differentiated among the latent classes. Patients in the All High class had the worst QOL scores. Conclusion Findings from this study confirm the large amount of interindividual variability in the symptom experience of oncology patients. The identification of demographic and clinical characteristics that place patients at risk for a higher symptom burden can be used to guide more aggressive and individualized symptom management interventions.
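The latent class analysis step described above can be sketched with a minimal EM fit for binary symptom-occurrence data. This is an illustrative implementation under assumed settings (function name, smoothing constants, fixed iteration count), not the authors' software.

```python
import numpy as np

def lca_em(X, n_classes, n_iter=200, seed=0):
    """Fit a latent class model to binary data via EM.

    X: (n, m) array of 0/1 symptom-occurrence indicators.
    Returns class weights, per-class symptom probabilities,
    and posterior class memberships.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    weights = np.full(n_classes, 1.0 / n_classes)         # P(class)
    probs = rng.uniform(0.25, 0.75, size=(n_classes, m))  # P(symptom | class)
    for _ in range(n_iter):
        # E-step: posterior P(class | response pattern), computed in log space
        log_lik = X @ np.log(probs).T + (1 - X) @ np.log(1 - probs).T
        log_post = np.log(weights) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class weights and conditional probabilities
        weights = post.mean(axis=0)
        probs = (post.T @ X + 1e-6) / (post.sum(axis=0)[:, None] + 2e-6)
    return weights, probs, post
```

In practice the class count is a model-selection question: fits for several candidate numbers of classes are compared (e.g., by BIC), which is how a four-class solution like the one above would be chosen.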

Relevance: 20.00%

Abstract:

Local spatio-temporal features with a Bag-of-visual-words model is a popular approach used in human action recognition. Bag-of-features methods suffer from several challenges, such as extracting appropriate appearance and motion features from videos, converting extracted features into a form appropriate for classification, and designing a suitable classification framework. In this paper we address the problem of efficiently representing the extracted features for classification to improve the overall performance. We introduce two generative supervised topic models, maximum entropy discrimination LDA (MedLDA) and class-specific simplex LDA (css-LDA), to encode the raw features for discriminative SVM-based classification. Unsupervised LDA models disconnect topic discovery from the classification task, hence yield poor results compared to the baseline Bag-of-words framework. On the other hand, supervised LDA techniques learn the topic structure by considering the class labels and improve the recognition accuracy significantly. MedLDA maximizes likelihood and within-class margins using max-margin techniques and yields a sparse, highly discriminative topic structure, while in css-LDA separate class-specific topics are learned instead of a common set of topics across the entire dataset. In our representation, topics are first learned and then each video is represented as a topic-proportion vector, i.e., comparable to a histogram of topics. Finally, SVM classification is done on the learned topic-proportion vector. We demonstrate the efficiency of the above two representation techniques through experiments carried out on two popular datasets. Experimental results demonstrate significantly improved performance compared to the baseline Bag-of-features framework, which uses k-means to construct a histogram of words from the feature vectors.
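MedLDA and css-LDA are not available in standard libraries, so the pipeline shape can only be sketched with scikit-learn's unsupervised LatentDirichletAllocation as a stand-in (which, as the abstract notes, underperforms the supervised variants): encode each video's visual-word histogram as a topic-proportion vector, then train a linear SVM on those vectors. Function and variable names here are illustrative.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

def topic_svm(X_counts, y, n_topics=8, seed=0):
    """Represent each video by its topic-proportion vector, then classify.

    X_counts: (n_videos, n_visual_words) bag-of-features histograms.
    Uses unsupervised LDA as a placeholder for MedLDA/css-LDA, which
    instead learn the topics with supervision from the class labels.
    """
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    theta = lda.fit_transform(X_counts)       # per-video topic proportions
    clf = SVC(kernel="linear").fit(theta, y)  # SVM on the topic vectors
    return lda, clf
```

The low-dimensional topic-proportion representation, rather than the raw word histogram, is what the SVM sees at both training and test time.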

Relevance: 20.00%

Abstract:

Context: Identifying susceptibility genes for schizophrenia may be complicated by phenotypic heterogeneity, with some evidence suggesting that phenotypic heterogeneity reflects genetic heterogeneity. Objective: To evaluate the heritability and conduct genetic linkage analyses of empirically derived, clinically homogeneous schizophrenia subtypes. Design: Latent class and linkage analysis. Setting: Taiwanese field research centers. Participants: The latent class analysis included 1236 Han Chinese individuals with DSM-IV schizophrenia. These individuals were members of a large affected-sibling-pair sample of schizophrenia (606 ascertained families), original linkage analyses of which detected a maximum logarithm of odds (LOD) of 1.8 (z = 2.88) on chromosome 10q22.3. Main Outcome Measures: Multipoint exponential LOD scores by latent class assignment and parametric heterogeneity LOD scores. Results: Latent class analyses identified 4 classes, with 2 demonstrating familial aggregation. The first (LC2) described a group with severe negative symptoms, disorganization, and pronounced functional impairment, resembling “deficit schizophrenia.” The second (LC3) described a group with minimal functional impairment, mild or absent negative symptoms, and low disorganization. Using the negative/deficit subtype, we detected genome-wide significant linkage to 1q23-25 (LOD = 3.78, empiric genome-wide P = .01). This region was not detected using the DSM-IV schizophrenia diagnosis, but has been strongly implicated in schizophrenia pathogenesis by previous linkage and association studies. Variants in the 1q region may specifically increase risk for a negative/deficit schizophrenia subtype. Alternatively, these results may reflect increased familiality/heritability of the negative class, the presence of multiple 1q schizophrenia risk genes, or a pleiotropic 1q risk locus or loci, with stronger genotype-phenotype correlation with negative/deficit symptoms.
Using the second familial latent class, we identified nominally significant linkage to the original 10q peak region. Conclusion: Genetic analyses of heritable, homogeneous phenotypes may improve the power of linkage and association studies of schizophrenia and thus have relevance to the design and analysis of genome-wide association studies.

Relevance: 20.00%

Abstract:

Latent class and genetic analyses were used to identify subgroups of migraine sufferers in a community sample of 6,265 Australian twins (55% female) aged 25-36 who had completed an interview based on International Headache Society (IHS) criteria. Consistent with prevalence rates from other population-based studies, 703 (20%) female and 250 (9%) male twins satisfied the IHS criteria for migraine without aura (MO), and of these, 432 (13%) female and 166 (6%) male twins satisfied the criteria for migraine with aura (MA) as indicated by visual symptoms. Latent class analysis (LCA) of IHS symptoms identified three major symptomatic classes, representing 1) a mild form of recurrent nonmigrainous headache, 2) a moderately severe form of migraine, typically without visual aura symptoms (although 40% of individuals in this class were positive for aura), and 3) a severe form of migraine typically with visual aura symptoms (although 24% of individuals were negative for aura). Using the LCA classification, many more individuals were considered affected to some degree than when using IHS criteria (35% vs. 13%). Furthermore, genetic model fitting indicated a greater genetic contribution to migraine using the LCA classification (heritability, h² = 0.40; 95% CI, 0.29-0.46) compared with the IHS classification (h² = 0.36; 95% CI, 0.22-0.42). Exploratory latent class modeling, fitting up to 10 classes, did not identify classes corresponding to either the IHS MO or MA classification. Our data indicate the existence of a continuum of severity, with MA more severe but not etiologically distinct from MO. In searching for predisposing genes, we should therefore expect to find some genes that may underlie all major recurrent headache subtypes, with modifying genetic or environmental factors that may lead to differential expression of the liability for migraine.

Relevance: 20.00%

Abstract:

For zygosity diagnosis in the absence of genotypic data, or in the recruitment phase of a twin study where only single twins from same-sex pairs are being screened, or to provide a test for sample duplication leading to the false identification of a dizygotic pair as monozygotic, the appropriate analysis of respondents' answers to questions about zygosity is critical. Using data from a young adult Australian twin cohort (N = 2094 complete pairs and 519 singleton twins from same-sex pairs with complete responses to all zygosity items), we show that application of latent class analysis (LCA), fitting a 2-class model, yields results that show good concordance with traditional methods of zygosity diagnosis, but with certain important advantages. These include the ability, in many cases, to assign zygosity with specified probability on the basis of responses of a single informant (advantageous when one zygosity type is being oversampled); and the ability to quantify the probability of misassignment of zygosity, allowing prioritization of cases for genotyping as well as identification of cases of probable laboratory error. Out of 242 twins (from 121 like-sex pairs) where genotypic data were available for zygosity confirmation, only a single case was identified of incorrect zygosity assignment by the latent class algorithm. Zygosity assignment for that single case was identified by the LCA as uncertain (probability of being a monozygotic twin only 76%), and the co-twin's responses clearly identified the pair as dizygotic (probability of being dizygotic 100%). In the absence of genotypic data, or as a safeguard against sample duplication, application of LCA for zygosity assignment or confirmation is strongly recommended.
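The class-assignment step described above is an application of Bayes' rule: given per-item endorsement probabilities for the monozygotic (MZ) and dizygotic (DZ) classes, which LCA estimates from the data, a single informant's responses yield a posterior probability of monozygosity. The numeric values below are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

def zygosity_posterior(responses, p_mz, prob_mz, prob_dz):
    """Posterior P(MZ | item responses) under a 2-class latent class model.

    responses: 0/1 answers to zygosity questionnaire items
    p_mz: prior probability of monozygosity
    prob_mz, prob_dz: per-item endorsement probabilities for each class
    (in practice these are the parameters estimated by the LCA)
    """
    r = np.asarray(responses, dtype=float)
    # Likelihood of the response pattern under each class, assuming
    # conditional independence of items given the class
    lik_mz = np.prod(np.where(r == 1, prob_mz, 1 - prob_mz))
    lik_dz = np.prod(np.where(r == 1, prob_dz, 1 - prob_dz))
    num = p_mz * lik_mz
    return num / (num + (1 - p_mz) * lik_dz)
```

An intermediate posterior, like the 76% in the misassigned case above, is exactly the kind of output that lets cases be prioritized for genotyping.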

Relevance: 20.00%

Abstract:

The prevalence of latent autoimmune diabetes in adults (LADA) in patients diagnosed with type 2 diabetes mellitus (T2DM) ranges from 7 to 10% (1). They present at a younger age and have a lower BMI but poorer glycemic control, which may increase the risk of complications (2). However, a recent analysis of the Collaborative Atorvastatin Diabetes Study (CARDS) has demonstrated no difference in macrovascular or microvascular events between patients with LADA and T2DM, but neuropathy was not assessed (3). Previous studies quantifying neuropathy in patients with LADA are limited. In this study, we aimed to accurately quantify neuropathy in subjects with LADA compared with matched patients with T2DM.

Relevance: 20.00%

Abstract:

In this article, we introduce the general statistical analysis approach known as latent class analysis and discuss some of the issues associated with this type of analysis in practice. Two recent examples from the respiratory health literature are used to highlight the types of research questions that have been addressed using this approach.

Relevance: 20.00%

Abstract:

The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is a result of multiple distinct crash generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies, whether explaining or predicting crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple crash process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk generating processes, including engineering and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia and the results are compared to an Empirical Bayesian Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the EB-NB model.
The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
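The latent-class idea behind the BLC model can be sketched, in simplified form, as a two-component mixture of count distributions fitted by EM. This sketch drops the Bayesian priors and negative-binomial overdispersion of the actual model and uses Poisson components; names and constants are illustrative.

```python
import numpy as np

def poisson_mixture_em(counts, n_iter=300):
    """Fit a two-component Poisson mixture to site crash counts via EM.

    A simplified, non-Bayesian analogue of the latent class idea: each
    site's count is assumed to arise from one of two crash-generating
    processes with different mean rates.
    """
    y = np.asarray(counts, dtype=float)
    lam = np.array([y.mean() * 0.5 + 1e-3, y.mean() * 1.5 + 1e-3])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each site (log space;
        # the y! term is constant across components and can be dropped)
        log_p = np.log(w) + y[:, None] * np.log(lam) - lam
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component rates
        w = r.mean(axis=0)
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
        lam = np.maximum(lam, 1e-9)
    return w, lam, r
```

Sites with high responsibility under the high-rate component are blackspot candidates; in the paper's full model the components correspond to engineering and unobserved spatial processes, with priors encoding each process's contribution.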

Relevance: 10.00%

Abstract:

For the most part, the literature base for Integrated Marketing Communication (IMC) has developed from an applied or tactical level rather than from an intellectual or theoretical one. Since industry, practitioner and even academic studies have provided little insight into what IMC is and how it operates, our approach has been to investigate that other IMC community, that is, the academic or instructional group responsible for disseminating IMC knowledge. We proposed that the people providing course instruction and directing research activities have some basis for how they organize, consider and therefore instruct in the area of IMC. A syllabi analysis of 87 IMC units in six countries investigated the content of the unit, its delivery both physically and conceptually, and defined the audience of the unit. The study failed to discover any type of latent theoretical foundation that might be used as a base for understanding IMC. The students who are being prepared to extend, expand and enhance IMC concepts do not appear to be well-served by the curriculum we found in our research. The study concludes with a model for further IMC curriculum development.

Relevance: 10.00%

Abstract:

As an understanding of users' tacit knowledge and latent needs embedded in user experience has played a critical role in product development, users’ direct involvement in design has become a necessary part of the design process. Various ways of accessing users' tacit knowledge and latent needs have been explored in the field of user-centred design, participatory design, and design for experiencing. User-designer collaboration has been used unconsciously by traditional designers to facilitate the transfer of users' tacit knowledge and to elicit new knowledge. However, what makes user-designer collaboration an effective strategy has rarely been reported on or explored. Therefore, interaction patterns between the users and the designers in three industry-supported user involvement cases were studied. In order to develop a coding system, collaboration was defined as a set of coordinated and joint problem solving activities, measured by the elicitation of new knowledge from collaboration. The analysis of interaction patterns in the user involvement cases revealed that allowing users to challenge or modify their contextual experiences facilitates the transfer of knowledge and new knowledge generation. It was concluded that users can be more effectively integrated into the product development process by employing collaboration strategies to intensify the depth of user involvement.

Relevance: 10.00%

Abstract:

With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirement of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user’s interest. Considering the semantic relationships of the words used to describe the services, as well as the input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web service description language document, the support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse areas of the domain of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, which is the system integration, integrates the results from the preceding two phases by using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations including individual and composite Web services to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services that are found in phase-I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
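The link-analysis phase can be sketched with the standard Floyd-Warshall all-pairs shortest-path algorithm over a graph of services. The edge costs (standing in for the thesis's traversal cost) and the path-reconstruction helper are illustrative assumptions.

```python
def all_pairs_shortest_paths(n, edges):
    """Floyd-Warshall over a service-link graph.

    n: number of Web services (nodes); edges: (u, v, cost) triples, where
    cost reflects how well service u's outputs feed service v's inputs
    (the weighting scheme here is illustrative, not the thesis's exact one).
    Returns a distance matrix and a next-hop table for path reconstruction.
    """
    INF = float("inf")
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
    for u, v, c in edges:
        if c < dist[u][v]:
            dist[u][v] = c
            nxt[u][v] = v
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def reconstruct(nxt, u, v):
    """Recover the cheapest service composition from u to v."""
    if nxt[u][v] is None:
        return []  # no chain of services links u to v
    path = [u]
    while u != v:
        u = nxt[u][v]
        path.append(u)
    return path
```

The recovered node sequence is the candidate composition handed to the fusion engine in the third phase.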

Relevance: 10.00%

Abstract:

In Bryan v Maloney, the High Court extended a builder’s duty of care to encompass a liability in negligence for the pure economic loss sustained by a subsequent purchaser of a residential dwelling as a result of latent defects in the building’s construction. Recently, in Woolcock Street Investments Pty Ltd v CDG Pty Ltd, the Court refused to extend this liability to defects in commercial premises. The decision therefore provides an opportunity to re-examine the rationale and policy behind current jurisprudence governing builders’ liability for pure economic loss. In doing so, this article considers the principles relevant to the determination of a duty of care generally and whether the differences between purchasers of residential and commercial properties are as great as the case law suggests.

Relevance: 10.00%

Abstract:

Objective The review addresses two distinct sets of issues: 1. specific functionality, interface, and calculation problems that presumably can be fixed or improved; and 2. the more fundamental question of whether the system is close to being ready for ‘commercial prime time’ in the North American market. Findings Many of our comments relate to the first set of issues, especially sections B and C. Sections D and E deal with the second set. Overall, we feel that LCADesign represents a very impressive step forward in the ongoing quest to link CAD with LCA tools and, more importantly, to link the world of architectural practice and that of environmental research. From that perspective, it deserves continued financial support as a research project. However, if the decision is whether or not to continue the development program from a purely commercial perspective, we are less bullish. In terms of the North American market, there are no regulatory or other drivers to press design teams to use a tool of this nature. There is certainly interest in this area, but the tools must be very easy to use with little or no training. Understanding the results is as important in this regard as knowing how to apply the tool. Our comments are fairly negative when it comes to that aspect. Our opinion might change to some degree when the ‘fixes’ are made and the functionality improved. However, as discussed in more detail in the following sections, we feel that the multi-step process — CAD to IFC to LCADesign — could pose a serious problem in terms of market acceptance. The CAD to IFC part is impossible for us to judge with the information provided, and we can’t even begin to answer the question about the ease of using the software to import designs, but it appears cumbersome from what we do know. There does appear to be a developing North American market for 3D CAD, with a recent survey indicating that about 50% of the firms use some form of 3D modeling for about 75% of their projects. 
However, this does not mean that full 3D CAD is always being used. Our information suggests that AutoDesk accounts for about 75 to 80% of the 3D CAD market, and they are very cautious about any links that do not serve a latent demand. Finally, other systems that link CAD to energy simulation are using XML data transfer protocols rather than IFC files, and it is our understanding that the market served by AutoDesk tends in that direction right now. This is a subject that is outside our area of expertise, so please take these comments as suggestions for more intensive market research rather than as definitive findings.

Relevance: 10.00%

Abstract:

Despite changes in surgical techniques, radiotherapy targeting and the apparent earlier detection of cancers, secondary lymphoedema is still a significant problem for about 20–30% of those who receive treatment for cancer, although the incidence and prevalence do seem to be falling. The figures above generally relate to detection of an enlarged limb or other area, but it seems that about 60% of all patients also suffer other problems with how the limb feels, what can or cannot be done with it, and a range of social or psychological issues. Often these ‘subjective’ changes occur before the objective ones, such as a change in arm volume or circumference. For most of those treated for cancer, lymphoedema does not develop immediately, and, while about 60–70% develop it in the first few years, some do not develop lymphoedema for up to 15 or 20 years. Those who will develop clinically manifest lymphoedema in the future are, for some time, in a latent or hidden phase of lymphoedema. There also seem to be some risk factors that are indicators of a higher likelihood of lymphoedema post treatment, including oedema at the surgical site, arm dominance, age, skin conditions, and body mass index (BMI).

Relevance: 10.00%

Abstract:

Vehicle detectors have been installed approximately every 300 meters in each lane of the Tokyo Metropolitan Expressway. Various traffic data, such as traffic volume, average speed and time occupancy, are collected by these detectors. Traffic characteristics at each point can be understood by comparing traffic data collected at consecutive points. In this study, we focused on average speed, analyzed road potential based on operating speed during free-flow conditions, and identified latent bottlenecks. Furthermore, we analyzed the effects of rainfall level and day of the week on road potential. This method of analysis is expected to be useful for the deployment of ITS such as driving assistance, the estimation of parameters for traffic simulation, and feedback to road design as a congestion countermeasure.
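The bottleneck-identification idea can be sketched as follows: with free-flow average speeds at consecutive detector stations, a station whose speed falls well below its immediate upstream neighbour is a candidate latent bottleneck. The drop threshold below is an illustrative assumption, not the study's calibrated value.

```python
def latent_bottlenecks(speeds, drop_kmh=15.0):
    """Flag candidate latent bottlenecks from free-flow operating speeds.

    speeds: average free-flow speeds (km/h) at consecutive detector
    stations along one lane (about 300 m apart). A station index is
    flagged when its speed falls by at least drop_kmh relative to the
    immediately upstream station.
    """
    flags = []
    for i in range(1, len(speeds)):
        if speeds[i - 1] - speeds[i] >= drop_kmh:
            flags.append(i)
    return flags
```

In the study this comparison is further stratified by rainfall level and day of the week, since road potential varies with both.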