997 results for Problem Resolution


Relevance: 20.00%

Abstract:

This paper shows how instructors can use the problem-based learning method to introduce producer theory and market structure in intermediate microeconomics courses. The paper proposes a framework where different decision problems are presented to students, who are asked to imagine that they are the managers of a firm who need to solve a problem in a particular business setting. In this setting, the instructors' role is to provide both guidance to facilitate student learning and content knowledge on a just-in-time basis.

Relevance: 20.00%

Abstract:

In this paper, we present and apply a semisupervised support vector machine based on cluster kernels for the problem of very high resolution image classification. In the proposed setting, a base kernel working with labeled samples only is deformed by a likelihood kernel encoding similarities between unlabeled examples. The resulting kernel is used to train a standard support vector machine (SVM) classifier. Experiments carried out on very high resolution (VHR) multispectral and hyperspectral images using very few labeled examples show the relevance of the method in the context of urban image classification. Its simplicity and the small number of parameters involved make it versatile and workable by inexperienced users.
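As a rough illustration of the cluster-kernel idea described above, here is a minimal scikit-learn sketch: a base kernel on the labeled samples is combined with a similarity kernel derived from the cluster structure of all samples (labeled and unlabeled), and the result is fed to a standard SVM. The convex-combination deformation, the k-means clustering step and all parameter values are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def cluster_kernel(X_labeled, X_all, gamma=0.5, n_clusters=2, alpha=0.5):
    """Deform an RBF base kernel (labeled samples only) with a
    cluster-membership kernel estimated from all samples. A sketch of
    the cluster-kernel idea; the paper's deformation may differ."""
    # Base kernel on labeled samples only
    K_base = rbf_kernel(X_labeled, X_labeled, gamma=gamma)
    # Cluster structure estimated on labeled + unlabeled data together
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X_all)
    z = labels[: len(X_labeled)]
    # Likelihood kernel: 1 if two labeled samples fall in the same cluster
    K_clust = (z[:, None] == z[None, :]).astype(float)
    # Convex combination of two valid kernels is still a valid kernel
    return alpha * K_base + (1 - alpha) * K_clust

# Toy data: two well-separated blobs, only four labeled points
rng = np.random.default_rng(0)
X_all = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
X_lab = X_all[[0, 1, 40, 41]]
y_lab = np.array([0, 0, 1, 1])

K = cluster_kernel(X_lab, X_all)
clf = SVC(kernel="precomputed").fit(K, y_lab)
print(clf.predict(K).tolist())
```

To classify new points, the same combined kernel would have to be evaluated between the test points and the labeled training points before calling `predict`.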

Relevance: 20.00%

Abstract:

Background: The 'database search problem', that is, the strengthening of a case, in terms of probative value, against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and sharply opposing conclusions. This represents a challenging obstacle in teaching, and it also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view of the main debated issues, along with further clarity.

Methods: As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population.

Results: This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches.

Conclusions: The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
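The counter-intuitive core result, that excluding database members can slightly strengthen rather than weaken the case against the matchee, can be illustrated without Bayesian networks using the simple island-type model in which this debate is usually framed. The model, parameter names and numbers below are illustrative assumptions, not taken from the paper itself:

```python
def posterior_source(N, n, gamma):
    """Posterior probability that the database matchee is the crime-stain
    source, under a toy model: N equally likely potential sources a priori,
    a database containing n of them is searched, exactly one match is found
    (so n - 1 members are excluded), and a non-source matches with
    random-match probability gamma. Illustrative only; not the paper's
    Bayesian networks."""
    # The n - 1 excluded database members drop out of the denominator;
    # each of the N - n untyped individuals could still match with prob gamma.
    return 1.0 / (1.0 + (N - n) * gamma)

# A single suspect typed on probable cause corresponds to n = 1:
p_probable_cause = posterior_source(N=1_000_000, n=1, gamma=1e-6)
p_database_hit = posterior_source(N=1_000_000, n=10_000, gamma=1e-6)

# The database search excludes 9,999 alternative sources, so the posterior
# is (slightly) HIGHER after a database hit: the counter-intuitive solution.
print(p_database_hit > p_probable_cause)
```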

Relevance: 20.00%

Abstract:

The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author's style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998).

Tirant lo Blanc, a chivalry book, is the main work in Catalan literature, and it was hailed as "the best book of its kind in the world" by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Damaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of this book was written between 1460 and 1465, but it was not printed until 1490. There is an intense and long-lasting debate around its authorship, sprouting from its first edition, where the introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last one fourth of the book is by Galba (?-1490), written after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting the double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author.

By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383. Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of the estimation of a sudden change-point in those sequences. In the following sections we propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models onto the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models onto the sequence of values taken by the first component of the correspondence analysis, as well as onto sequences of other summary measures like the average word length. In Section 7 we fit models onto the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
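The change-point idea behind this kind of analysis can be sketched on synthetic data: treat each chapter as a multinomial count row, try every possible split of the sequence, and keep the split maximizing the pooled two-segment likelihood. This is a minimal illustration of the general approach on made-up data, not the paper's actual models (which fit generalized linear models for polytomous data, gamma models on chi-square distances, and so on):

```python
import numpy as np

def multinomial_loglik(counts):
    """Profile multinomial log-likelihood of a block of count rows, pooling
    the rows and plugging in the MLE of the category probabilities.
    Multinomial coefficients are omitted: they do not depend on the split."""
    totals = counts.sum(axis=0).astype(float)
    p = totals / totals.sum()
    mask = totals > 0  # categories with zero pooled counts contribute 0
    return float((counts[:, mask] * np.log(p[mask])).sum())

def change_point(counts):
    """Maximum-likelihood change point for a sequence of multinomial rows:
    try every split k and keep the one maximizing the two-segment
    likelihood. A sketch of the idea, not the paper's estimators."""
    n = len(counts)
    best_k, best_ll = None, -np.inf
    for k in range(1, n):
        ll = multinomial_loglik(counts[:k]) + multinomial_loglik(counts[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# 60 synthetic "chapters" with 3 word categories; distribution shifts at row 40
rng = np.random.default_rng(1)
before = rng.multinomial(200, [0.5, 0.3, 0.2], size=40)
after = rng.multinomial(200, [0.3, 0.3, 0.4], size=20)
counts = np.vstack([before, after])
print(change_point(counts))  # close to the true boundary at 40
```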

Relevance: 20.00%

Abstract:

Epipolar geometry is a key concept in computer vision, and fundamental matrix estimation is the only way to compute it. This article surveys several methods of fundamental matrix estimation, which have been classified into linear methods, iterative methods and robust methods. All of these methods have been programmed and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
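Of the linear methods such surveys cover, the normalized eight-point algorithm is the canonical one: each correspondence contributes one row of a homogeneous linear system whose least-squares solution, after a rank-2 correction, gives the fundamental matrix. Here is a minimal NumPy sketch; the synthetic two-camera test at the end is an illustrative assumption, not data from the article:

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: translate points to their centroid and scale
    so the mean distance from the origin is sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """Normalized eight-point estimate of F such that x2^T F x1 = 0."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear system A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a valid fundamental matrix is singular
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    # Undo the normalization and fix the (arbitrary) scale
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)

def project(P, X):
    Xh = np.column_stack([X, np.ones(len(X))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

# Synthetic check: two cameras related by a pure translation
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (12, 3)) + [0, 0, 5]
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
x1, x2 = project(P1, X), project(P2, X)
F = eight_point(x1, x2)
res = [abs(np.append(b, 1) @ F @ np.append(a, 1)) for a, b in zip(x1, x2)]
print(max(res))  # epipolar residuals near machine precision
```

The iterative and robust methods in the survey (e.g. RANSAC-style estimators) typically wrap this same linear core, re-weighting or re-sampling correspondences to cope with noise and outliers.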

Relevance: 20.00%

Abstract:

BACKGROUND. The phenomenon of misdiagnosing tuberculosis (TB) by laboratory cross-contamination when culturing Mycobacterium tuberculosis (MTB) has been widely reported, and it has an obvious clinical, therapeutic and social impact. The final confirmation of a cross-contamination event requires the molecular identification of the same MTB strain cultured from both the potential source of the contamination and the false-positive candidate. The molecular tool usually applied in this context is IS6110-RFLP, which takes a long time to provide an answer, usually longer than is acceptable for microbiologists and clinicians to make decisions. Our purpose in this study is to evaluate a novel PCR-based method, MIRU-VNTR, as an alternative to ensure a rapid and optimized analysis of cross-contamination alerts. RESULTS. MIRU-VNTR was prospectively compared with IS6110-RFLP for clarifying 19 alerts of false positivity from other laboratories. MIRU-VNTR highly correlated with IS6110-RFLP, reduced the response time by 27 days and clarified six alerts unresolved by RFLP. Additionally, MIRU-VNTR revealed complex situations such as contamination events involving polyclonal isolates and a false-positive case due to simultaneous cross-contamination from two independent sources. CONCLUSION. Unlike standard RFLP-based genotyping, MIRU-VNTR i) could help reduce the impact of a false-positive diagnosis of TB, ii) increased the number of events that could be solved, and iii) revealed the complexity of some cross-contamination events that could not be dissected by IS6110-RFLP.

Relevance: 20.00%

Abstract:

Due to SNR constraints, current "bright-blood" 3D coronary MRA approaches still suffer from limited spatial resolution when compared to conventional x-ray coronary angiography. Recent 2D fast spin-echo black-blood techniques maximize signal for coronary MRA at no loss in image spatial resolution. This suggests that the extension of black-blood coronary MRA with a 3D imaging technique would allow for a further signal increase, which may be traded for an improved spatial resolution. Therefore, a dual-inversion 3D fast spin-echo imaging sequence and real-time navigator technology were combined for high-resolution free-breathing black-blood coronary MRA. In-plane image resolution below 400 microm was obtained. Magn Reson Med 45:206-211, 2001.

Relevance: 20.00%

Abstract:

We created a high-throughput modality of photoactivated localization microscopy (PALM) that enables automated 3D PALM imaging of hundreds of synchronized bacteria during all stages of the cell cycle. We used high-throughput PALM to investigate the nanoscale organization of the bacterial cell division protein FtsZ in live Caulobacter crescentus. We observed that FtsZ predominantly localizes as a patchy midcell band, and only rarely as a continuous ring, supporting a model of "Z-ring" organization whereby FtsZ protofilaments are randomly distributed within the band and interact only weakly. We found evidence for a previously unidentified period of rapid ring contraction in the final stages of the cell cycle. We also found that DNA damage resulted in production of high-density continuous Z-rings, which may obstruct cytokinesis. Our results provide a detailed quantitative picture of in vivo Z-ring organization.

Relevance: 20.00%

Abstract:

Quantitative polymerase chain reaction-high-resolution melting (qPCR-HRM) analysis was used to screen for mutations related to drug resistance in Mycobacterium tuberculosis. We detected the C526T and C531T mutations in the rifampicin resistance-determining region (RRDR) of the rpoB gene with qPCR-HRM using plasmid-based controls. A segment of the RRDR region from M. tuberculosis H37Rv and from strains carrying C531T or C526T mutations in the rpoB were cloned into pGEM-T vector and these vectors were used as controls in the qPCR-HRM analysis of 54 M. tuberculosis strains. The results were confirmed by DNA sequencing and showed that recombinant plasmids can replace genomic DNA as controls in the qPCR-HRM assay. Plasmids can be handled outside of biosafety level 3 facilities, reducing the risk of contamination and the cost of the assay. Plasmids have a high stability, are normally maintained in Escherichia coli and can be extracted in large amounts.

Relevance: 20.00%

Abstract:

OBJECTIVES: To assess the extent to which stage at diagnosis and adherence to treatment guidelines may explain the persistent differences in colorectal cancer survival between the USA and Europe. DESIGN: A high-resolution study using detailed clinical data on Dukes' stage, diagnostic procedures, treatment and follow-up, collected directly from medical records by trained abstractors under a single protocol, with standardised quality control and central statistical analysis. SETTING AND PARTICIPANTS: 21 population-based registries in seven US states and nine European countries provided data for random samples comprising 12 523 adults (15-99 years) diagnosed with colorectal cancer during 1996-1998. OUTCOME MEASURES: Logistic regression models were used to compare adherence to 'standard care' in the USA and Europe. Net survival and excess risk of death were estimated with flexible parametric models. RESULTS: The proportion of Dukes' A and B tumours was similar in the USA and Europe, while that of Dukes' C was more frequent in the USA (38% vs 21%) and of Dukes' D more frequent in Europe (22% vs 10%). Resection with curative intent was more frequent in the USA (85% vs 75%). Elderly patients (75-99 years) were 70-90% less likely to receive radiotherapy and chemotherapy. Age-standardised 5-year net survival was similar in the USA (58%) and Northern and Western Europe (54-56%) and lowest in Eastern Europe (42%). The mean excess hazard up to 5 years after diagnosis was highest in Eastern Europe, especially among elderly patients and those with Dukes' D tumours. CONCLUSIONS: The wide differences in colorectal cancer survival between Europe and the USA in the late 1990s are probably attributable to earlier stage and more extensive use of surgery and adjuvant treatment in the USA. Elderly patients with colorectal cancer received surgery, chemotherapy or radiotherapy less often than younger patients, despite evidence that they could also have benefited.

Relevance: 20.00%

Abstract:

We formulate a necessary and sufficient condition for polynomials to be dense in a space of continuous functions on the real line, with respect to Bernstein's weighted uniform norm. Equivalently, for a positive finite measure μ on the real line we give a criterion for density of polynomials in L^p(μ).
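For readers unfamiliar with the setting, the weighted uniform norm in Bernstein's approximation problem is usually set up as follows. This is the standard formulation, stated here as an assumption since the abstract does not spell it out:

```latex
% Standard Bernstein setup (assumed): a weight W \ge 1 on the real line
% with x^n / W(x) \to 0 as |x| \to \infty for every n \ge 0, and the space
% C_W of continuous f with f/W vanishing at \pm\infty, normed by
\[
  \|f\|_{W} \;=\; \sup_{x \in \mathbb{R}} \frac{|f(x)|}{W(x)} .
\]
% The density question asks for which W the polynomials are dense in C_W;
% the L^p analogue asks when they are dense in L^p(\mu) for a positive
% finite measure \mu on \mathbb{R}.
```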