837 results for Computer Aided Diagnosis
Abstract:
Bulk gallium nitride (GaN) power semiconductor devices have gained significant interest in recent years, creating the need for technology computer-aided design (TCAD) simulation to accurately model and optimize these devices. This paper comprehensively reviews and compares the GaN physical models and model parameters reported in the literature, and discusses the appropriate selection of these models and parameters for TCAD simulation. 2-D semi-classical drift-diffusion simulations are carried out for 2.6 kV and 3.7 kV bulk GaN vertical PN diodes. The simulated forward current-voltage and reverse breakdown characteristics are in good agreement with measurement data, even over a wide temperature range.
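For reference, the semi-classical drift-diffusion formulation referred to above couples Poisson's equation with the carrier continuity and current relations; the textbook form is shown below (a general statement of the model, not the specific GaN parameterisation reviewed in the paper):

```latex
\nabla \cdot \left( \varepsilon \nabla \psi \right) = -q \left( p - n + N_D^{+} - N_A^{-} \right),
\qquad
\frac{\partial n}{\partial t} = \frac{1}{q} \nabla \cdot \mathbf{J}_n + G - R,
\qquad
\frac{\partial p}{\partial t} = -\frac{1}{q} \nabla \cdot \mathbf{J}_p + G - R,
\qquad
\mathbf{J}_n = q \mu_n n \mathbf{E} + q D_n \nabla n,
\qquad
\mathbf{J}_p = q \mu_p p \mathbf{E} - q D_p \nabla p .
```

The choice of mobility, recombination and impact-ionisation models entering the current densities and the generation-recombination term G - R is where the GaN-specific model and parameter selection discussed in the paper comes into play.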
Abstract:
Virtual topology operations have been utilized to generate an analysis topology definition suitable for downstream mesh generation. Detailed descriptions are provided for virtual topology merge and split operations for all topological entities. Current virtual topology technology is extended to allow the virtual partitioning of volume cells and the topological queries required to carry out each operation are provided. Virtual representations are robustly linked to the underlying geometric definition through an analysis topology. The analysis topology and all associated virtual and topological dependencies are automatically updated after each virtual operation, providing the link to the underlying CAD geometry. Therefore, a valid description of the analysis topology, including relative orientations, is maintained. This enables downstream operations, such as the merging or partitioning of virtual entities, and interrogations, such as determining if a specific meshing strategy can be applied to the virtual volume cells, to be performed on the analysis topology description. As the virtual representation is a non-manifold description of the sub-divided domain the interfaces between cells are recorded automatically. This enables the advantages of non-manifold modelling to be exploited within the manifold modelling environment of a major commercial CAD system, without any adaptation of the underlying CAD model. A hierarchical virtual structure is maintained where virtual entities are merged or partitioned. This has a major benefit over existing solutions as the virtual dependencies are stored in an open and accessible manner, providing the analyst with the freedom to create, modify and edit the analysis topology in any preferred sequence, whilst the original CAD geometry is not disturbed. Robust definitions of the topological and virtual dependencies enable the same virtual topology definitions to be accessed, interrogated and manipulated within multiple different CAD packages and linked to the underlying geometry.
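As a toy illustration of the kind of virtual merge operation and dependency bookkeeping described above (a simplified data-structure sketch, not the paper's implementation; all class and function names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class VirtualEntity:
    """A virtual topological entity linked to the entities it replaces."""
    name: str
    hosts: list = field(default_factory=list)   # underlying CAD entities or virtual children

def merge(name, entities):
    """Merge several entities into one virtual entity, preserving the hierarchy."""
    return VirtualEntity(name, hosts=list(entities))

def resolve_cad(entity):
    """Walk the virtual hierarchy down to the original CAD entities."""
    if not isinstance(entity, VirtualEntity):
        return [entity]                          # a raw CAD entity (e.g. a face id)
    cad = []
    for host in entity.hosts:
        cad.extend(resolve_cad(host))
    return cad

# Example: two CAD faces merged for meshing; a later merge builds on the first,
# yet the analysis topology can always be traced back to the CAD geometry.
vf1 = merge("vface1", ["face7", "face8"])
vf2 = merge("vface2", [vf1, "face9"])
print(resolve_cad(vf2))   # ['face7', 'face8', 'face9']
```

The point of the hierarchy is the one made in the abstract: virtual dependencies are stored openly, so merges and partitions can be created, modified or undone in any order without disturbing the original CAD model.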
Abstract:
This paper examines the integration of a tolerance design process within the Computer-Aided Design (CAD) environment, having identified the potential to create an intelligent Digital Mock-Up [1]. The tolerancing process is complex in nature, and as such, reliance on Computer-Aided Tolerancing (CAT) software and domain experts can create a disconnect between the design and manufacturing disciplines. It is necessary to implement the tolerance design procedure at the earliest opportunity to integrate both disciplines and to reduce the workload of tolerance analysis and allocation at critical stages in product development when production is imminent.
The work seeks to develop a methodology that will allow for a preliminary tolerance allocation procedure within CAD. An approach to tolerance allocation based on sensitivity analysis is implemented on a simple assembly to review its contribution to an intelligent DMU. The procedure is developed using Python scripting for CATIA V5, with analysis results aligning with those in the literature. A review of its implementation and requirements is presented.
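As a rough illustration of the kind of sensitivity-based allocation described above (a generic RSS stack-up scheme, not the paper's actual CATIA V5 script; the function name and example values are hypothetical):

```python
import numpy as np

def allocate_tolerances(sensitivities, assembly_tol, weights=None):
    """Scale component tolerances so the RSS stack-up meets the assembly tolerance.

    sensitivities : partial derivatives of the assembly dimension w.r.t. each
                    contributing component dimension (from a sensitivity analysis).
    assembly_tol  : allowable variation of the assembly-level dimension.
    weights       : relative ease of holding each tolerance (defaults to equal).
    """
    s = np.asarray(sensitivities, dtype=float)
    w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
    # Start from tolerances proportional to the weights, then scale them so that
    # sqrt(sum((S_i * t_i)^2)) equals the assembly tolerance (RSS criterion).
    return w / np.sqrt(np.sum((s * w) ** 2)) * assembly_tol

# Example: three contributing dimensions with sensitivities taken from a CAD model.
s = np.array([1.0, -0.5, 2.0])
tols = allocate_tolerances(s, assembly_tol=0.2)
print(tols)                                   # per-component tolerances
print(np.sqrt(np.sum((s * tols) ** 2)))       # RSS stack-up, ~0.2
```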
Abstract:
E-learning activities at higher education institutions are frequently oriented one-sidedly towards supporting courses through technology, in particular learning management systems. In the process, students as a target group come into view only indirectly. The authors of this volume take this observation as an occasion to examine the learning experience and the different phases of study from the students' perspective. They also investigate which support services higher education institutions should ideally provide in which phases. The authors describe in detail how institutions can support student learning with the help of social software. These recommendations are based on the results of empirical studies as well as on case studies of national and international examples of good practice, which are presented in detail. With this volume, the authors aim to give those working in tertiary-sector educational institutions (but also in other sectors) concrete suggestions for paying greater attention to support services for students' informal learning with social software and for developing suitable offerings. The publication is based on results of the project "Learner Communities of Practice", which was carried out between 2009 and 2012 as a joint project of Saxon higher education institutions, funded by the SMWK and led by the Media Centre of TU Dresden.
Abstract:
Three-dimensional printing (“3DP”) is an additive manufacturing technology that starts with a virtual 3D model of the object to be printed, the so-called Computer-Aided-Design (“CAD”) file. This file, when sent to the printer, gives instructions to the device on how to build the object layer-by-layer. This paper explores whether design protection is available under the current European regulatory framework for designs that are computer-created by means of CAD software, and, if so, under what circumstances. The key point is whether the appearance of a product, embedded in a CAD file, could be regarded as a protectable element under existing legislation. To this end, it begins with an inquiry into the concepts of “design” and “product”, set forth in Article 3 of the Community Design Regulation No. 6/2002 (“CDR”). Then, it considers the EUIPO’s practice of accepting 3D digital representations of designs. The enquiry goes on to illustrate the implications that the making of a CAD file available online might have. It suggests that the act of uploading a CAD file onto a 3D printing platform may be tantamount to a disclosure for the purposes of triggering unregistered design protection, and for appraising the state of the prior art. It also argues that, when measuring the individual character requirement, the notion of “informed user” and “the designer’s degree of freedom” may need to be reconsidered in the future. The following part touches on the exceptions to design protection, with a special focus on the repairs clause set forth in Article 110 CDR. The concluding part explores different measures that may be implemented to prohibit the unauthorised creation and sharing of CAD files embedding design-protected products.
Abstract:
Laser scanning is a terrestrial laser-imaging system that creates highly accurate three-dimensional images of objects for use in standard computer-aided design software packages. This report describes the results of a pilot study investigating the use of laser scanning for transportation applications in Iowa. After an initial training period on the use of the scanner and Cyclone software, pilot tests were performed on the following projects: an intersection and a railroad bridge for training purposes; a section of highway to determine elevation accuracy; a pair of bridges to determine the level of detail that can be captured; new concrete pavement to determine smoothness; bridge beams to determine camber for deck-loading calculations; a stockpile to determine volume; and a borrow pit to determine volume. Results show that it is possible to obtain 2-6 mm precision with the laser scanner, as claimed by the manufacturer, compared to approximately one-inch precision with aerial photogrammetry using a helicopter. A cost comparison between helicopter photogrammetry and laser scanning showed that laser scanning was approximately 30 percent higher in cost, depending on assumptions. Laser scanning can become more competitive with helicopter photogrammetry by elevating the scanner on a boom truck and capturing both sides of a divided roadway at the same time. Two- and three-dimensional drawings were created in MicroStation for one of the scanned highway bridges, demonstrating that it is possible to create such drawings within the accuracy of this technology. However, a significant amount of time is needed to convert point cloud images into drawings; as this technology matures, this task should become less time-consuming. Laser scanning technology does appear to have a place in the Iowa Department of Transportation design and construction toolbox. Based on the results of this study, laser scanning can be used cost-effectively for preliminary surveys to develop TIN meshes of roadway surfaces. The technique can also be used quite effectively to measure bridge beam camber in a safer and quicker fashion than conventional approaches. Volume calculations are also possible using laser scanning; measuring quantities of rock could be an area where this technology is particularly beneficial, since accuracy is more important for this material than for soil. Other applications could include developing as-built drawings of historical structures, such as the bridges of Madison County. The technology could also be useful where safety is a concern, such as accurately measuring the surface of a highway active with traffic or scanning the underside of a bridge damaged by a truck. It is recommended that the Iowa Department of Transportation initially rent the scanner when it is needed and purchase the software; with time, it may be cost-justifiable to purchase the scanner as well. Laser scanning consultants can also be hired, but at a higher cost.
Abstract:
Thesis (Doctorate)
Abstract:
A fully coupled non-linear effective stress response finite difference (FD) model is built to examine recent counter-intuitive findings on the dependence of the pore water pressure ratio on foundation contact pressure. Two alternative design scenarios for a benchmark problem are explored and contrasted in the light of construction emission rates using the EFFC-DFI methodology. A strain-hardening effective stress plasticity model is adopted to simulate the dynamic loading. A combination of input motions, contact pressure, initial vertical total pressure and distance to the foundation centreline are employed as model variables to further investigate the control of permanent and variable actions on the residual pore pressure ratio. The model is verified against the Ghosh and Madabhushi high acceleration field test database. The outputs of this work are aimed at improving current computer-aided seismic foundation design, which relies on the ground's packing state and consistency. The results confirm that on seismic excitation of shallow foundations, the likelihood of effective stress loss is greater at larger depths and across the free field. For the benchmark problem, adopting a shallow foundation system instead of a piled foundation resulted in a 75% lower emission rate, a marked proportion of which is owed to reduced material and haulage carbon costs.
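For context, the pore water pressure ratio discussed above is conventionally defined as the excess pore pressure normalised by the initial vertical effective stress (a standard definition, not a quantity specific to this study):

```latex
r_u = \frac{\Delta u}{\sigma'_{v0}},
```

where \(\Delta u\) is the seismically induced excess pore water pressure and \(\sigma'_{v0}\) is the initial vertical effective stress; values of \(r_u\) approaching 1 indicate a complete loss of effective stress, i.e. the onset of liquefaction.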
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements which change as a function of time. When such a problem is solved on a message passing multiprocessor machine [5], the combination of these characteristics leads to system performance which deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
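A minimal sketch of the kind of remapping decision policy described above, weighing the predicted gain from rebalancing against its cost (a generic heuristic for illustration, not the CAPTools implementation; all names and values are hypothetical):

```python
def should_rebalance(step_times, remap_cost, steps_remaining):
    """Decide whether to trigger a global load re-balance.

    step_times      : per-processor wall-clock times for the last step.
    remap_cost      : estimated cost of repartitioning and migrating data.
    steps_remaining : expected number of steps before the run (or next remap) ends.
    """
    t_max = max(step_times)                 # slowest processor sets the pace
    t_avg = sum(step_times) / len(step_times)
    gain_per_step = t_max - t_avg           # time recoverable per step if perfectly balanced
    projected_gain = gain_per_step * steps_remaining
    return projected_gain > remap_cost      # remap only if the saving outweighs the cost

# Example: 4 processors, one heavily loaded; remapping costs ~2.0 s.
print(should_rebalance([1.0, 1.1, 0.9, 1.8], remap_cost=2.0, steps_remaining=50))  # True
```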
Abstract:
The availability of CFD software that can easily be used and produce high efficiency on a wide range of parallel computers is extremely limited. The investment and expertise required to parallelise a code can be enormous. In addition, the cost of supercomputers forces high utilisation to justify their purchase, requiring a wide range of software. To break this impasse, tools are urgently required to assist in the parallelisation process that dramatically reduce the parallelisation time but do not degrade the performance of the resulting parallel software. In this paper we discuss enhancements to the Computer Aided Parallelisation Tools (CAPTools) to assist in the parallelisation of complex unstructured mesh-based computational mechanics codes.
Abstract:
As the efficiency of parallel software increases it is becoming common to measure near-linear speedup for many applications. For a problem of size N on P processors, with the software running at O(N/P), the performance restrictions due to file I/O systems and mesh decomposition, which run at O(N), become increasingly apparent, especially for large P. For distributed memory parallel systems, an additional limit to scalability results from the finite memory available for I/O scatter/gather operations. Simple strategies developed to address the scalability of scatter/gather operations for unstructured mesh based applications have been extended to provide scalable mesh decomposition through the development of a parallel graph partitioning code, JOSTLE [8]. The focus of this work is directed towards the development of generic strategies that can be incorporated into the Computer Aided Parallelisation Tools (CAPTools) project.
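The scalability limit alluded to above can be made explicit with a simple cost model (an illustrative Amdahl-style argument, not taken from the paper): if a step costs a N/P for the parallel computation plus b N for the O(N) I/O and decomposition work, the speedup saturates as P grows:

```latex
T(P) = a\,\frac{N}{P} + b\,N,
\qquad
S(P) = \frac{T(1)}{T(P)} = \frac{(a+b)\,N}{a\,N/P + b\,N} = \frac{a+b}{a/P + b}
\;\longrightarrow\; \frac{a+b}{b} \quad (P \to \infty).
```

Hence any per-step O(N) component, however small its constant b, eventually caps the achievable speedup, which is why the scatter/gather and decomposition stages themselves must be made scalable.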
Abstract:
Background. Tremendous advances in biomaterials science and nanotechnologies, together with thorough research on stem cells, have recently promoted an intriguing development of regenerative medicine/tissue engineering. Nanotechnology represents a wide interdisciplinary field that implies the manipulation of different materials at the nanometer level to achieve the creation of constructs that mimic the nanoscale-based architecture of native tissues. Aim. The purpose of this article is to highlight the significant new knowledge regarding this matter. Emerging acquisitions. To widen the range of scaffold materials, recourse has been made either to materials generated by recombinant DNA technology, such as a collagen-like protein, or to the incorporation of bioactive molecules, such as RGD (arginine-glycine-aspartic acid), into synthetic products. Both the bottom-up and the top-down fabrication approaches may be properly used to obtain, respectively, supramolecular architectures or micro-/nanostructures to be incorporated within a pre-existing complex scaffold construct. Computer-aided design/manufacturing (CAD/CAM) scaffold techniques make it possible to achieve patient-tailored organs. Stem cells, because of their peculiar properties - the ability to proliferate, self-renew and differentiate into specific cell lineages under appropriate conditions - represent an attractive source for intriguing tissue engineering/regenerative medicine applications. Future research activities. New developments in the tissue engineering of different organs will depend on further progress of both the science of nanoscale-based materials and the knowledge of stem cell biology. Moreover, in vivo tissue engineering appears to be the logical next step of the current research.
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality in order to increase the complexity and cost of reverse engineering. Most of the existing circuit obfuscation methods are based on the insertion of additional logic (called “key gates”) or camouflaging existing gates in order to make it difficult for a malicious user to get the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he/she can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all the input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some ‘provably secure’ logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed previously in literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't care conditions. We also present proof-of-concept of a new functional or logic obfuscation technique that not only conceals, but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don’t care conditions (namely, Observability and Satisfiability Don’t Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate by a 4X1 Multiplexer which can be configured to perform all possible 2-input/ 1-output functions as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute force the design. As such, we propose a method of camouflaged gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the RE complexity based on don’t care based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm on ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future. References: [1] R. 
Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Proc. Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in Proc. ACM Conference on Computer and Communications Security (CCS), 2013. [4] B. Liu and B. Wang, "Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks," in Proc. Design, Automation and Test in Europe (DATE), 2014, pp. 1–6.
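A minimal sketch of the cone-intersection idea described in the abstract above, selecting camouflaging candidates that lie in the fan-in cones of many primary outputs (an illustrative toy on a netlist represented as a dict, not the authors' algorithm; all names are hypothetical):

```python
from collections import defaultdict

def fanin_cone(netlist, node, cone=None):
    """Collect every node that (transitively) drives `node`."""
    if cone is None:
        cone = set()
    for driver in netlist.get(node, ()):   # netlist: gate -> list of driving gates/nets
        if driver not in cone:
            cone.add(driver)
            fanin_cone(netlist, driver, cone)
    return cone

def rank_camouflage_candidates(netlist, primary_outputs):
    """Rank internal gates by how many primary-output logic cones they appear in."""
    counts = defaultdict(int)
    for po in primary_outputs:
        for node in fanin_cone(netlist, po):
            if node in netlist:            # count gates only, not primary inputs
                counts[node] += 1
    # Gates shared by the most output cones are the preferred camouflaging targets,
    # since resolving them forces the attacker to reason about several outputs at once.
    return sorted(counts, key=counts.get, reverse=True)

# Toy netlist: gate g3 lies in the cones of both outputs, so it ranks first.
netlist = {"o1": ["g1", "g3"], "o2": ["g2", "g3"],
           "g1": ["a"], "g2": ["b"], "g3": ["a", "b"]}
print(rank_camouflage_candidates(netlist, ["o1", "o2"])[0])   # g3
```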
Abstract:
The opportunity to produce microalgal biomass has attracted interest because of the diverse uses it can have, whether in bioenergy production, as a food source, or as a product of carbon dioxide biofixation. In general, the large-scale production of cyanobacteria and microalgae is monitored through offline physico-chemical analyses. In this context, the objective of this work was to monitor the cell concentration in a raceway photobioreactor for microalgal biomass production using digital data acquisition and process control techniques, through the inline acquisition of illuminance, biomass concentration, temperature and pH data. To this end, it was necessary to build a software-based sensor capable of determining the microalgal biomass concentration from optical measurements of the intensity of scattered monochromatic radiation, and to develop a mathematical model of microalgal biomass production on the microcontroller, using a natural computing algorithm to fit the model. An autonomous system for recording information from the cultivation was designed, built and tested during outdoor pilot-scale cultivations of Spirulina sp. LEB 18. A biomass concentration sensor based on the measurement of transmitted radiation was tested. In a second stage, an optical sensor of Spirulina sp. LEB 18 biomass concentration, based on measuring the intensity of radiation scattered by the cyanobacterium suspension, was conceived, built and tested in a laboratory experiment under controlled conditions of light, temperature and biomass suspension flow. From the light-scattering measurements, a neuro-fuzzy inference system was built, which serves as a software sensor of the biomass concentration in the culture. Finally, from the biomass concentrations of the culture over time, the use of the Arduino platform for empirical modelling of the growth kinetics using the Verhulst equation was explored. The measurements made with the optical sensor based on the intensity of monochromatic radiation transmitted through the suspension, used under outdoor conditions, showed a low correlation between biomass concentration and radiation, even for concentrations below 0.6 g/L. When the optical scattering by the culture suspension was investigated, monochromatic radiation at 530 nm at angles of 45° and 90° showed a linearly increasing behaviour with concentration, with a coefficient of determination of 0.95 in both cases. It was possible to build a software-based biomass concentration sensor using the combined information of scattered radiation intensity at angles of 45° and 135°, with a coefficient of determination of 0.99. It is feasible to simultaneously perform the inline determination of Spirulina cultivation process variables and the empirical kinetic modelling of the micro-organism's growth through the Verhulst equation on an Arduino microcontroller.
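As a small illustration of the Verhulst (logistic) growth model mentioned above, the sketch below fits its closed-form solution to biomass concentration data with SciPy on a PC; it is a generic illustration, not the thesis's Arduino implementation, and the sample data points are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, x0, xmax, mu):
    """Closed-form solution of dX/dt = mu * X * (1 - X/xmax) with X(0) = x0."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

# Hypothetical biomass concentrations (g/L) sampled over a cultivation (days).
t_days = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)
x_gl   = np.array([0.15, 0.22, 0.34, 0.52, 0.78, 1.05, 1.25, 1.38])

# Fit X0, Xmax and mu to the observations.
(p_x0, p_xmax, p_mu), _ = curve_fit(verhulst, t_days, x_gl, p0=[0.15, 1.5, 0.3])
print(f"X0 = {p_x0:.3f} g/L, Xmax = {p_xmax:.3f} g/L, mu = {p_mu:.3f} 1/d")
```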