15 results for computer assisted sperm analysis
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of construction of the nonparametric mixture model estimator are reviewed and set into perspective. Construction of the maximum likelihood estimator of the mixing distribution is done for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered with some generalisations of Zelterman’s estimator. All computations are done with CAMCR, a special software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems using the mixture model-based estimators are highlighted.
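The Chao estimator mentioned above has a simple closed form that can be sketched directly (CAMCR itself is not reproduced here). A minimal illustration, using hypothetical capture-frequency data, where f_k is the number of units observed exactly k times:

```python
def chao_estimator(frequencies):
    """Chao's lower-bound estimate of population size from capture
    frequencies: N_hat = n + f1**2 / (2 * f2), where f_k is the number
    of units observed exactly k times and n is the total observed."""
    n = sum(frequencies.values())
    f1 = frequencies.get(1, 0)
    f2 = frequencies.get(2, 0)
    if f2 == 0:
        raise ValueError("Chao's estimator needs at least one doubleton")
    return n + f1 ** 2 / (2 * f2)

# Hypothetical data: freq[k] = number of individuals captured exactly k times
freq = {1: 50, 2: 20, 3: 8, 4: 2}
print(chao_estimator(freq))  # 142.5 (80 observed + 62.5 estimated unseen)
```

The estimate is a lower bound: it adds to the observed count an estimate of the unseen units driven by the singleton/doubleton ratio.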
Abstract:
Modern organisms are adapted to a wide variety of habitats and lifestyles. The processes of evolution have led to the complex, interdependent, well-designed mechanisms of today's world, and the research challenge is to transpose these innovative solutions to problems in architectural design practice, i.e., to relate design by nature to design by human. In a design-by-human environment, design synthesis can be performed with rapid prototyping techniques that transform almost instantaneously any 2D design representation into a physical three-dimensional model on a rapid prototyping printer. Rapid prototyping processes add layers of material one on top of another until a complete model is built; an analogy can be drawn with design by nature, where the natural laying down of earth layers shapes the earth's surface, a process occurring repeatedly over long periods of time. Concurrence in design will particularly benefit from rapid prototyping techniques, as the prime purpose of physical prototyping is to promptly assist iterative design, enabling design participants to work with a three-dimensional hardcopy and use it to validate their design ideas. Concurrent design is a systematic approach that aims to facilitate the simultaneous involvement and commitment of all participants in the building design process, enabling both an effective reduction of time and costs at the design phase and an improvement in the quality of the design product. This paper presents the results of an exploratory survey investigating both how computer-aided design systems help designers to fully define the shape of their design ideas and the extent to which design practice applies rapid prototyping technologies coupled with Internet facilities.
The findings suggest that design practitioners recognize that these technologies can greatly enhance concurrence in design, while acknowledging a lack of knowledge about rapid prototyping.
Abstract:
Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for characterizing the texture of mass lesions and architectural distortion. We applied the concept of lacunarity to describe the spatial distribution of pixel intensities in mammographic images. These methods were tested on a set of 57 breast masses and 60 normal breast parenchyma samples (dataset1), and on another set of 19 architectural distortions and 41 normal breast parenchyma samples (dataset2). Support vector machines (SVM) were used as the pattern classification method for tumor classification. Results: Experimental results showed that, for all five methods, the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the largest area under the ROC curve of the five methods for both datasets (Az = 0.839 for dataset1 and 0.828 for dataset2). Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarity than ROIs depicting normal breast parenchyma. The combination of FBM fractal dimension and lacunarity yielded higher Az values (0.903 and 0.875, respectively) than any single feature alone for both datasets. The application of the SVM improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, yielding higher Az values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images because its self-affinity assumption is a better approximation. Lacunarity is an effective counterpart of the fractal dimension in texture feature extraction from mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
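Box counting is one of the popular fractal dimension estimators of the kind compared above (the abstract does not name the five methods, so this is an illustrative choice, not the paper's implementation). A minimal sketch on a binary image:

```python
import math

def box_count_dimension(image, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary image by box
    counting: count occupied boxes N(s) at each box size s, then fit the
    slope of log N(s) against log(1/s) by least squares."""
    n = len(image)
    counts = []
    for s in sizes:
        boxes = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                # A box is occupied if any pixel inside it is set.
                if any(image[i + di][j + dj]
                       for di in range(s) for dj in range(s)
                       if i + di < n and j + dj < n):
                    boxes += 1
        counts.append(boxes)
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(c) for c in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a completely filled patch behaves as a 2-D object.
filled = [[1] * 32 for _ in range(32)]
print(round(box_count_dimension(filled), 2))  # 2.0
```

A textured ROI would give a non-integer slope between 2 and 3 when extended to intensity surfaces, which is the quantity compared between lesion and normal parenchyma above.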
Abstract:
We present an intuitive geometric approach for analysing the structure and fragility of T1-weighted structural MRI scans of human brains. Apart from computing characteristics like the surface area and volume of regions of the brain that consist of highly active voxels, we also employ Network Theory in order to test how close these regions are to breaking apart. This analysis is used in an attempt to automatically classify subjects into three categories: Alzheimer’s disease, mild cognitive impairment and healthy controls, for the CADDementia Challenge.
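The paper's exact graph construction is not given in the abstract; as a hedged sketch, one way to test how close a region is to breaking apart is to build an adjacency graph over its voxels and look for articulation points, i.e. nodes whose removal disconnects the region:

```python
def components(adj, removed=frozenset()):
    """Count connected components of an undirected adjacency dict after
    deleting the nodes in `removed` (iterative depth-first search)."""
    seen, count = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        stack = [start]
        seen.add(start)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return count

def articulation_points(adj):
    """Nodes whose removal increases the component count -- a crude
    proxy for the fragility of a voxel-adjacency graph."""
    base = components(adj)
    return [u for u in adj if components(adj, frozenset([u])) > base]

# Toy region graph: a chain a-b-c with a pendant d attached to b.
adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
print(articulation_points(adj))  # ['b']
```

A region with many articulation points is close to fragmenting; the brute-force check here is O(V·(V+E)) and would be replaced by Tarjan's linear-time algorithm at voxel scale.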
Abstract:
The recent increase in short messaging system (SMS) text messaging, often using abbreviated, non-conventional ‘textisms’ (e.g. ‘2nite’), in school-aged children has raised fears of negative consequences of such technology for literacy. The current research used a paradigm developed by Dixon and Kaminska, who showed that exposure to phonetically plausible misspellings (e.g. ‘recieve’) negatively affected subsequent spelling performance, though this was true only with adults, not children. The current research extends this work to directly investigate the effects of exposure to textisms, misspellings and correctly spelled words on adults’ spelling. Spelling of a set of key words was assessed both before and after an exposure phase where participants read the same key words, presented either as textisms (e.g. ‘2nite’), correctly spelled (e.g. ‘tonight’) or misspelled (e.g. ‘tonite’) words. Analysis showed that scores decreased from pre- to post-test following exposure to misspellings, whereas performance improved following exposure to correctly spelled words and, interestingly, to textisms. Data suggest that exposure to textisms, unlike misspellings, had a positive effect on adults’ spelling. These findings are interpreted in light of other recent research suggesting a positive relationship between texting and some literacy measures in school-aged children.
Abstract:
There are few other areas in family law where incongruence between the legal and social positions is as evident as that concerning parenthood. Recent cases involving lesbian couples and known sperm donors serve to highlight the increasing tension between the respective roles of biology, intention and functional parenting in the attribution of legal parental status. As both legislative and case-law developments have shown, intention is central in some circumstances, but not in others. The main claim of this paper is that this ad hoc approach leads to incoherent and unsatisfactory law: instead of striving to identify a status, what we are really looking to do is to identify the people who assume responsibility for a child. Drawing upon recent case-law, this paper explores how a conceptual reform of the law could result in a principled framework which would place formally recognised intention at the heart of parental status in order to reconnect legal duty with social reality for as many children and parents as possible. Moreover, it would ensure that parental status would not be dictated by the mode of conception of the child (natural or assisted). The analysis identifies the objectives of reform before proposing a new model which, while recognising the social importance of the biological parentage link, would reserve legal status for functional parenthood.
Abstract:
For fifty years, computer chess has pursued an original goal of Artificial Intelligence: to produce a chess engine that competes at the highest level. The goal has arguably been achieved, but that success has made it harder to answer questions about the relative playing strengths of man and machine. The proposal here is to approach such questions in a counter-intuitive way, handicapping or stopping-down chess engines so that they play less well. The intrinsic lack of man-machine games may be side-stepped by analysing existing games to place computer engines as accurately as possible on the FIDE Elo scale of human play. Move-sequences may also be assessed for likelihood if computer-assisted cheating is suspected.
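Placing an engine on the Elo scale rests on the standard logistic expected-score relation, which can be inverted to get a performance rating from game results. A minimal sketch for the simplified case of a single opponent strength (the paper's actual calibration against analysed games is more involved):

```python
import math

def expected_score(r_player, r_opponent):
    """Elo expected score for the player against one opponent."""
    return 1.0 / (1.0 + 10 ** ((r_opponent - r_player) / 400.0))

def performance_rating(opponent_rating, score, games):
    """Invert the expected-score curve: the rating that would produce
    the observed score fraction against the given opponent strength."""
    p = score / games
    return opponent_rating - 400.0 * math.log10(1.0 / p - 1.0)

# A handicapped engine scoring 7.5/10 against 2000-rated opposition
# performs at roughly 2191 Elo.
print(round(performance_rating(2000, 7.5, 10)))  # 2191
```

The same curve underlies cheating detection: a move-sequence whose engine-agreement rate implies a performance far above the player's rating is flagged as unlikely.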
Abstract:
BACKGROUND & AIMS: The mechanisms underlying abdominal pain perception in irritable bowel syndrome (IBS) are poorly understood. Intestinal mast cell infiltration may perturb nerve function, leading to symptom perception. We assessed colonic mast cell infiltration, mediator release, and spatial interactions with mucosal innervation, and their correlation with abdominal pain in IBS patients. METHODS: IBS patients were diagnosed according to Rome II criteria and abdominal pain was quantified with a validated questionnaire. Colonic mucosal mast cells were identified immunohistochemically and quantified with a computer-assisted counting method. Mast cell tryptase and histamine release were analyzed immunoenzymatically. Intestinal nerve-to-mast-cell distance was assessed with electron microscopy. RESULTS: Thirty-four out of 44 IBS patients (77%) showed an increased area of mucosa occupied by mast cells as compared with controls (9.2% +/- 2.5% vs. 3.3% +/- 0.8%, respectively; P < 0.001). There was a 150% increase in the number of degranulating mast cells (4.76 +/- 3.18/field vs. 2.42 +/- 2.26/field, respectively; P = 0.026). Mucosal content of tryptase was increased in IBS, and mast cells spontaneously released more tryptase (3.22 +/- 3.48 pmol/min/mg vs. 0.87 +/- 0.65 pmol/min/mg, respectively; P = 0.015) and histamine (339.7 +/- 59.0 ng/g vs. 169.3 +/- 130.6 ng/g, respectively; P = 0.015). Mast cells located within 5 μm of nerve fibers numbered 7.14 +/- 3.87/field vs. 2.27 +/- 1.63/field in IBS vs. controls (P < 0.001). Only mast cells in close proximity to nerves were significantly correlated with severity and frequency of abdominal pain/discomfort (P < 0.001 and P = 0.003, respectively). CONCLUSIONS: Colonic mast cell infiltration and mediator release in proximity to mucosal innervation may contribute to abdominal pain perception in IBS patients.
Abstract:
Monitoring nutritional intake is an important aspect of the care of older people, particularly for those at risk of malnutrition. Current practice for monitoring food intake relies on handwritten food charts that have several inadequacies. We describe the design and validation of a tool for computer-assisted visual assessment of patient food and nutrient intake. To estimate food consumption, the application compares the pixels the user rubbed out against predefined graphical masks. The weight of food consumed is calculated from the percentage of mask pixels rubbed out. Results suggest that the application may be a useful tool for the conservative assessment of nutritional intake in hospitals.
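The rubbed-pixels-versus-mask calculation can be sketched in a few lines. This is an illustrative reconstruction, not the tool's actual code; the 180 g portion weight is a hypothetical value:

```python
def fraction_consumed(rubbed, mask):
    """Estimate the fraction of a food item consumed as the share of
    mask pixels the user rubbed out.

    `mask` is the set of (x, y) pixels covered by the item's predefined
    graphical mask; `rubbed` is the set the user erased. Only rubbed
    pixels that fall inside the mask count."""
    if not mask:
        return 0.0
    return len(rubbed & mask) / len(mask)

mask = {(x, y) for x in range(10) for y in range(10)}   # 100-pixel item
rubbed = {(x, y) for x in range(10) for y in range(5)}  # half erased
portion_weight = 180  # grams served (hypothetical)
print(fraction_consumed(rubbed, mask) * portion_weight)  # 90.0
```

Intersecting with the mask keeps the estimate conservative: stray erasures outside the food region do not inflate the consumed weight.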
Abstract:
Recent studies have shown that features extracted from brain MRIs can discriminate well between Alzheimer’s disease and Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies were then used to solve a multi-class problem via the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the predictive power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machines-based wrapper improves binary classification accuracy.
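The one-versus-one strategy itself is simple to sketch: one binary classifier per class pair, with a majority vote picking the label. The nearest-centroid stub below is only a stand-in for the fitted SVMs, and the class labels and centroid values are hypothetical:

```python
from itertools import combinations
from collections import Counter

def one_versus_one_predict(binary_predict, classes, x):
    """One-versus-one multi-class decision: query one binary classifier
    per class pair and let a majority vote pick the label.
    `binary_predict(a, b, x)` must return either a or b."""
    votes = Counter(binary_predict(a, b, x)
                    for a, b in combinations(classes, 2))
    return votes.most_common(1)[0][0]

# Toy stand-in for the fitted SVMs: pick the closer class centroid
# along a single hypothetical feature axis.
centroids = {"AD": 0.0, "MCI": 5.0, "HC": 10.0}
def nearest(a, b, x):
    return a if abs(x - centroids[a]) <= abs(x - centroids[b]) else b

print(one_versus_one_predict(nearest, ["AD", "MCI", "HC"], 4.2))  # MCI
```

With k classes this trains k(k-1)/2 binary classifiers (3 here), which is why the best-performing binary classifiers were selected first.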
Abstract:
This paper explores the linguistic practice of digital code plays in an online discussion forum, used by the community of English-speaking Germans living in Britain. By adopting a qualitative approach of Computer-Mediated Discourse Analysis, the article examines the ways in which these bilinguals deploy linguistic and other semiotic resources on the forum to co-construct humorous code plays. These performances occur in the context of negotiating language norms and are based on conscious manipulations of both codes, English and German. They involve play with codes at three levels: play with forms, meanings, and frames. Although, at first sight, such alternations appear to be used mainly for a comic effect, there is more to this than just humour. By mixing both codes at all levels, the participants deliberately produce aberrant German ‘polluted’ with English and, in so doing, dismantle the ideology of language purity upheld by the purist movement. The deliberate character of this type of code alternation demonstrates heightened metalinguistic awareness as well as creativity and criticality. By exploring the practice of digital code plays, the current study contributes to the growing body of research on networked multilingualism as well as to practices associated with translanguaging, poly- and metrolingualism.
Abstract:
The social cost of food scares has been the object of substantial applied research worldwide. In Italy, meat and dairy products are often the vectors of food-borne pathogens, and this is well known by the public. Most cases of food contamination and poisoning have their causes in the way food is handled after, rather than before, purchase. However, a large fraction is still caused by mishandling at the industrial stage. With this in mind, we set out to estimate Italian households’ willingness to pay (WTP) for a reduction in the risk of meat and dairy food contamination using contingent valuation. The survey design incorporated features specifically conceived to overcome difficulties faced in previous survey research, especially with respect to individualized food expenditures and risk communication. To achieve this objective, a CAPI (computer-assisted personal interview) survey was devised to tackle two major issues that emerged in previous contingent valuation studies. The first issue concerns how risk is communicated to consumers so that they can make optimal choices; the second concerns the interpretation of the resulting estimates: contingent valuation estimates of food safety are typically given only for single products, so marketers may find it hard to extrapolate them to the aggregate. Our results show that in Italy there are segments of consumers who would benefit from higher standards of food safety for farm animal products.