23 results for Computer Assisted Design
in CentAUR: Central Archive University of Reading - UK
Abstract:
Modern organisms are adapted to a wide variety of habitats and lifestyles. The processes of evolution have led to the complex, interdependent, well-designed mechanisms of today's world, and the research challenge is to transpose these innovative solutions to problems in architectural design practice, that is, to relate design by nature to design by human. In a design-by-human environment, design synthesis can be performed with rapid prototyping techniques that transform almost instantaneously any 2D design representation into a physical three-dimensional model through a rapid prototyping printer. Rapid prototyping processes add layers of material one on top of another until a complete model is built; an analogy can be drawn with design by nature, where the natural laying down of earth layers shapes the earth's surface, a process repeated over long periods of time. Concurrence in design will particularly benefit from rapid prototyping techniques, as the prime purpose of physical prototyping is to support iterative design promptly, enabling design participants to work with a three-dimensional hard copy and use it to validate their design ideas. Concurrent design is a systematic approach that aims to facilitate the simultaneous involvement and commitment of all participants in the building design process, enabling both an effective reduction of time and cost in the design phase and an improvement in the quality of the design product. This paper presents the results of an exploratory survey investigating both how computer-aided design systems help designers fully define the shape of their design ideas and the extent to which design practice applies rapid prototyping technologies coupled with Internet facilities. The findings suggest that design practitioners recognize that these technologies can greatly enhance concurrence in design, while acknowledging a lack of knowledge about rapid prototyping.
Abstract:
Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components, up to the global nonparametric maximum likelihood bound, using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, together with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems with the mixture model-based estimators are highlighted.
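The Chao and Zelterman estimators referred to above can be computed directly from the frequency counts f1 (units observed exactly once) and f2 (units observed exactly twice). The Python sketch below illustrates these standard textbook formulae; it is not the CAMCR implementation, and the example counts are invented.

```python
import math

def chao_estimate(f1: int, f2: int, n_observed: int) -> float:
    """Chao's lower-bound estimator: observed count plus f1^2 / (2 * f2)."""
    if f2 == 0:
        # Bias-corrected variant avoids division by zero when f2 = 0.
        return n_observed + f1 * (f1 - 1) / 2.0
    return n_observed + f1 ** 2 / (2.0 * f2)

def zelterman_estimate(f1: int, f2: int, n_observed: int) -> float:
    """Zelterman's estimator, using the truncated-Poisson rate lambda = 2 * f2 / f1."""
    lam = 2.0 * f2 / f1
    return n_observed / (1.0 - math.exp(-lam))

# Invented example frequency counts: 50 units seen once, 20 seen twice, 90 seen in total.
f1, f2, n = 50, 20, 90
print(f"Chao estimate:      {chao_estimate(f1, f2, n):.1f}")
print(f"Zelterman estimate: {zelterman_estimate(f1, f2, n):.1f}")
```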
Abstract:
A vision system for recognizing rigid and articulated three-dimensional objects in two-dimensional images is described. Geometrical models are extracted from a commercial computer-aided design (CAD) package. The models are then augmented with appearance and functional information that improves the system's hypothesis generation, hypothesis verification, and pose refinement. Significant advantages over existing CAD-based vision systems, which use only the information available in the CAD system, are realized. Examples show the system recognizing, locating, and tracking a variety of objects in a robot work-cell and in natural scenes.
Abstract:
Virtual reality has the potential to improve the visualisation of building design and construction, but its implementation in the industry has yet to reach maturity. Present-day translation of building data to virtual reality is often unidirectional and unsatisfactory. Three different approaches to the creation of models are identified and described in this paper. Consideration is given to the potential of both advances in computer-aided design and the emerging standards for data exchange to facilitate an integrated use of virtual reality. Commonalities and differences between computer-aided design and virtual reality packages are reviewed, and trials of current systems are described. The trials were conducted to explore the technical issues related to the integrated use of CAD and virtual environments within the house-building sector of the construction industry and to investigate the practical use of the new technology.
Abstract:
Monitoring nutritional intake is an important aspect of the care of older people, particularly for those at risk of malnutrition. Current practice for monitoring food intake relies on handwritten food charts that have several inadequacies. We describe the design and validation of a tool for computer-assisted visual assessment of patient food and nutrient intake. To estimate food consumption, the application compares the pixels the user has rubbed out against predefined graphical masks: the weight of food consumed is calculated from the percentage of mask pixels that have been rubbed out. Results suggest that the application may be a useful tool for the conservative assessment of nutritional intake in hospitals.
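As a rough illustration of the percentage-of-pixels calculation described above, the Python/NumPy sketch below compares an erased ("rubbed out") region against a predefined food mask and scales the full-portion weight by the erased fraction. The array shapes and the 250 g portion weight are illustrative assumptions, not details taken from the application itself.

```python
import numpy as np

def estimate_consumed_weight(mask: np.ndarray, rubbed_out: np.ndarray,
                             full_portion_grams: float) -> float:
    """Estimate intake as the fraction of mask pixels the user rubbed out,
    scaled by the weight of the full portion."""
    mask_pixels = np.count_nonzero(mask)
    if mask_pixels == 0:
        return 0.0
    # Only erased pixels that fall inside the food mask count as consumed.
    consumed_pixels = np.count_nonzero(np.logical_and(mask, rubbed_out))
    fraction_consumed = consumed_pixels / mask_pixels
    return fraction_consumed * full_portion_grams

# Toy example: a 100x100 image whose food mask is a 60x60 square;
# the user has rubbed out the left half of the image.
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 20:80] = True
rubbed = np.zeros_like(mask)
rubbed[:, :50] = True
print(f"Estimated intake: {estimate_consumed_weight(mask, rubbed, 250.0):.0f} g")
```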
Abstract:
We present an intuitive geometric approach for analysing the structure and fragility of T1-weighted structural MRI scans of human brains. Apart from computing characteristics like the surface area and volume of regions of the brain that consist of highly active voxels, we also employ Network Theory in order to test how close these regions are to breaking apart. This analysis is used in an attempt to automatically classify subjects into three categories: Alzheimer’s disease, mild cognitive impairment and healthy controls, for the CADDementia Challenge.
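One way to make "how close these regions are to breaking apart" concrete, in the Network Theory spirit of the abstract, is to build a graph over a region's active voxels and inspect its minimum node cut. The sketch below is our own illustration using networkx, assuming 6-connectivity between voxels; it is not the pipeline used in the CADDementia submission.

```python
import numpy as np
import networkx as nx

def voxel_graph(active: np.ndarray) -> nx.Graph:
    """Build a graph whose nodes are active voxels, with edges between
    face-adjacent (6-connected) neighbours."""
    g = nx.Graph()
    coords = set(map(tuple, np.argwhere(active)))
    g.add_nodes_from(coords)
    for (x, y, z) in coords:
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            neighbour = (x + dx, y + dy, z + dz)
            if neighbour in coords:
                g.add_edge((x, y, z), neighbour)
    return g

# Toy region: two 3x3x3 blocks of active voxels joined by a single-voxel bridge.
region = np.zeros((3, 3, 7), dtype=bool)
region[:, :, :3] = True
region[:, :, 4:] = True
region[1, 1, 3] = True                      # the fragile bridge

g = voxel_graph(region)
print("Voxels:", g.number_of_nodes())
print("Minimum node cut size:", nx.node_connectivity(g))  # 1 => one voxel from splitting
```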
Abstract:
The social cost of food scares has been the object of substantial applied research worldwide. In Italy, meat and dairy products are often the vectors of food-borne pathogens, and this is well known to the public. Most cases of food contamination and poisoning are caused by the way food is handled after, rather than before, purchase; however, a large fraction is still caused by mishandling at the industrial stage. With this in mind, we set out to estimate Italian households' willingness to pay (WTP) for a reduction in the risk of meat and dairy food contamination using contingent valuation. The survey design incorporated features specifically conceived to overcome difficulties faced in previous survey research, especially with respect to individualized food expenditures and risk communication. To achieve this objective, a CAPI (computer-assisted personal interview) survey was devised to tackle two major issues that emerged in previous contingent valuation studies: first, how to communicate risk to consumers so that they can make optimal choices, and second, how to interpret the resulting estimates, since contingent valuation estimates of food safety are typically given only for single products, making it hard for marketers to extrapolate them to the aggregate. Our results show that in Italy there are segments of consumers who would benefit from higher standards of food safety for farm animal products.
Abstract:
For fifty years, computer chess has pursued an original goal of Artificial Intelligence: to produce a chess engine able to compete at the highest level. The goal has arguably been achieved, but that success has made it harder to answer questions about the relative playing strengths of man and machine. The proposal here is to approach such questions in a counter-intuitive way, handicapping or stopping-down chess engines so that they play less well. The intrinsic lack of man-machine games may be side-stepped by analysing existing games to place computer engines as accurately as possible on the FIDE Elo scale of human play. Move-sequences may also be assessed for likelihood if computer-assisted cheating is suspected.
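Placing engines on the FIDE Elo scale ultimately rests on the standard Elo expected-score relation. The short sketch below shows that formula and its inverse; the ratings and scores in the example are purely illustrative and do not come from the paper.

```python
import math

def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

def rating_from_score(score: float, opponent_rating: float) -> float:
    """Invert the Elo formula: rating implied by an average score against a known opponent."""
    # Guard against perfect or zero scores, which map to infinite rating differences.
    score = min(max(score, 1e-6), 1.0 - 1e-6)
    return opponent_rating - 400.0 * math.log10(1.0 / score - 1.0)

# Illustrative example: an engine scoring 62% against 2200-rated opposition
# would sit roughly 85 Elo points above that level.
print(f"Implied rating: {rating_from_score(0.62, 2200.0):.0f}")
```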
Abstract:
Glycogen phosphorylase (GP) catalyzes the phosphorylation of glycogen to glucose 1-phosphate (Glc-1-P). Because of its fundamental role in the metabolism of glycogen, GP has been the target of a systematic structure-assisted design of inhibitory compounds, which could be of value in the therapeutic treatment of type 2 diabetes mellitus. The most potent catalytic-site inhibitor of GP identified to date is the spirohydantoin of glucopyranose (hydan). In this work, we employ molecular dynamics (MD) free energy simulations to calculate the relative binding affinities for GP of hydan and two spirohydantoin analogues, methyl-hydan and n-hydan, in which a hydrogen atom is replaced by a methyl or an amino group, respectively. The results are compared with the experimental relative affinities of these ligands, estimated by kinetic measurements of the ligand inhibition constants. The calculated binding affinity for methyl-hydan (relative to hydan) is 3.75 ± 1.4 kcal/mol, in excellent agreement with the experimental value (3.6 ± 0.2 kcal/mol). For n-hydan, the calculated value is 1.0 ± 1.1 kcal/mol, somewhat smaller than the experimental result (2.3 ± 0.1 kcal/mol). A free energy decomposition analysis shows that hydan makes optimum interactions with protein residues and specific water molecules in the catalytic site. In the other two ligands, structural perturbation of the active site by the additional methyl or amino group reduces the corresponding binding affinity. The computed binding free energies are sensitive to the preference of a specific water molecule for two well-defined positions in the catalytic site. The behavior of this water is analyzed in detail, and the free energy profile for its translocation between the two positions is evaluated. The results provide insights into the role of water molecules in modulating ligand binding affinities. A comparison of the interactions between a set of ligands and their surrounding groups in X-ray structures is often used to interpret binding free energy differences and to guide the design of new ligands. For the systems studied here, such an approach fails to predict the order of relative binding strengths, in contrast to the rigorous free energy treatment.
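The relative binding affinities quoted above follow the standard thermodynamic cycle used in alchemical free energy simulations: the binding free energy difference between two ligands equals the difference between the alchemical transformation free energies computed in the bound and in the solvated state. The sketch below restates that bookkeeping; the two leg values are invented placeholders, chosen only so that their difference reproduces the quoted 3.75 kcal/mol, and are not data from the paper.

```python
def relative_binding_free_energy(dg_mutation_bound: float,
                                 dg_mutation_free: float) -> float:
    """Thermodynamic cycle: ddG_bind = dG(A -> B in complex) - dG(A -> B in solution).
    A positive value means ligand B binds more weakly than ligand A."""
    return dg_mutation_bound - dg_mutation_free

# Illustrative (made-up) leg values in kcal/mol for a hydan -> methyl-hydan mutation.
dg_bound = 7.00   # alchemical transformation carried out in the GP catalytic site
dg_free = 3.25    # same transformation carried out in bulk solvent
print(f"ddG_bind = {relative_binding_free_energy(dg_bound, dg_free):+.2f} kcal/mol")
```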
Abstract:
Objective: This paper presents a detailed study of fractal-based methods for the texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for characterizing the texture of mass lesions and architectural distortion, and applied the concept of lacunarity to describe the spatial distribution of pixel intensities in mammographic images. These methods were tested on a set of 57 breast masses and 60 normal breast parenchyma regions (dataset1), and on another set of 19 architectural distortions and 41 normal breast parenchyma regions (dataset2). Support vector machines (SVM) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the largest area under the ROC curve of the five methods on both datasets (Az = 0.839 for dataset1 and 0.828 for dataset2). Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarity than ROIs depicting normal breast parenchyma. The combination of FBM fractal dimension and lacunarity yielded higher Az values (0.903 and 0.875, respectively) than either feature alone for both datasets. Applying the SVM further improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, yielding higher Az values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images, because its underlying self-affinity assumption is a better approximation of mammographic texture. Lacunarity is an effective counterpart to the fractal dimension for texture feature extraction from mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
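For readers unfamiliar with the texture measures discussed here, the sketch below computes generic, textbook versions of two of them on a synthetic binary image: a box-counting estimate of fractal dimension and a gliding-box lacunarity. It is not the FBM estimator or the exact pipeline evaluated in the paper, and the random test image merely stands in for a mammographic ROI.

```python
import numpy as np

def box_counting_dimension(binary: np.ndarray, box_sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate fractal dimension as the slope of log(count) vs log(1 / box size)."""
    counts = []
    for s in box_sizes:
        h, w = binary.shape
        # Count boxes of side s containing at least one foreground pixel.
        trimmed = binary[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

def gliding_box_lacunarity(image: np.ndarray, box: int = 8) -> float:
    """Gliding-box lacunarity: 1 + variance / mean^2 of the box-mass distribution."""
    h, w = image.shape
    masses = np.array([image[i:i + box, j:j + box].sum()
                       for i in range(h - box + 1) for j in range(w - box + 1)], dtype=float)
    return 1.0 + masses.var() / masses.mean() ** 2

# Synthetic random binary texture standing in for a mammographic ROI.
rng = np.random.default_rng(0)
roi = rng.random((128, 128)) > 0.6
print(f"Box-counting dimension: {box_counting_dimension(roi):.2f}")
print(f"Lacunarity (box=8):     {gliding_box_lacunarity(roi.astype(float)):.3f}")
```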