943 results for Java simulations
Abstract:
We demonstrate that the time-dependent projected Gross-Pitaevskii equation (GPE) derived earlier [M. J. Davis, R. J. Ballagh, and K. Burnett, J. Phys. B 34, 4487 (2001)] can represent the highly occupied modes of a homogeneous, partially-condensed Bose gas. Contrary to the often held belief that the GPE is valid only at zero temperature, we find that this equation will evolve randomized initial wave functions to a state describing thermal equilibrium. In the case of small interaction strengths or low temperatures, our numerical results can be compared to the predictions of Bogoliubov theory and its perturbative extensions. This demonstrates the validity of the GPE in these limits and allows us to assign a temperature to the simulations unambiguously. However, the GPE method is nonperturbative, and we believe it can be used to describe the thermal properties of a Bose gas even when Bogoliubov theory fails. We suggest a different technique to measure the temperature of our simulations in these circumstances. Using this approach we determine the dependence of the condensate fraction and specific heat on temperature for several interaction strengths, and observe the appearance of vortex networks. Interesting behavior near the critical point is observed and discussed.
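For context, a standard form of the Gross-Pitaevskii equation for the classical field ψ (not reproduced from the paper; the projected variant of Davis, Ballagh, and Burnett evolves only the highly occupied modes below an energy cutoff, and the external potential V vanishes for the homogeneous gas considered here):

```latex
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})
           + g\,\lvert\psi(\mathbf{r},t)\rvert^{2} \right] \psi(\mathbf{r},t),
\qquad g = \frac{4\pi\hbar^{2} a}{m}
```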
Abstract:
With the advent of object-oriented languages and the portability of Java, the development and use of class libraries has become widespread. Effective class reuse depends on class reliability which in turn depends on thorough testing. This paper describes a class testing approach based on modeling each test case with a tuple and then generating large numbers of tuples to thoroughly cover an input space with many interesting combinations of values. The testing approach is supported by the Roast framework for the testing of Java classes. Roast provides automated tuple generation based on boundary values, unit operations that support driver standardization, and test case templates used for code generation. Roast produces thorough, compact test drivers with low development and maintenance cost. The framework and tool support are illustrated on a number of non-trivial classes, including a graphical user interface policy manager. Quantitative results are presented to substantiate the practicality and effectiveness of the approach. Copyright (C) 2002 John Wiley & Sons, Ltd.
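To make the tuple idea concrete, a minimal sketch (hypothetical code, not the Roast API) that enumerates the Cartesian product of per-parameter boundary values as test tuples:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of tuple-based test generation: take boundary values for each
// parameter of a method under test and enumerate their Cartesian product.
public class TupleSketch {
    static List<int[]> tuples(int[][] boundaryValues) {
        List<int[]> out = new ArrayList<>();
        build(boundaryValues, 0, new int[boundaryValues.length], out);
        return out;
    }

    private static void build(int[][] vals, int pos, int[] cur, List<int[]> out) {
        if (pos == vals.length) { out.add(cur.clone()); return; }
        for (int v : vals[pos]) { cur[pos] = v; build(vals, pos + 1, cur, out); }
    }

    public static void main(String[] args) {
        // e.g. boundary values for two int parameters of a method under test
        int[][] bounds = { {Integer.MIN_VALUE, -1, 0, 1, Integer.MAX_VALUE},
                           {0, 1, 100} };
        System.out.println(tuples(bounds).size() + " test tuples"); // prints 15
    }
}
```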
Quantification and assessment of fault uncertainty and risk using stochastic conditional simulations
Abstract:
A recent malaria epidemic in the Menoreh Hills of Central Java has increased concern about the re-emergence of endemic malaria on Java, which threatens the island's 120 million residents. A 28-day, in-vivo test of the efficacy of treatment of malaria with antimalarial drugs was conducted among 167 villagers in the Menoreh Hills. The treatments investigated, chloroquine (CQ) and sulfadoxine-pyrimethamine (SP), constitute, respectively, the first- and second-line treatments for uncomplicated malaria in Indonesia. The prevalence of malaria among 1389 residents screened prior to enrollment was 33%. Treatment outcomes were assessed by microscopical diagnoses, PCR-based confirmation of the diagnoses, measurement of the whole-blood concentrations of CQ and desethylchloroquine (DCQ), and identification of the Plasmodium falciparum genotypes. The 28-day cumulative incidences of therapeutic failure for CQ and SP were, respectively, 47% (N = 36) and 22% (N = 50) in the treatment of P. falciparum, and 18% (N = 77) and 67% (N = 6) in the treatment of P. vivax. Chloroquine was thus an ineffective therapy for P. falciparum malaria, and the presence of CQ-resistant P. vivax and SP-resistant P. falciparum will further compromise efforts to control resurgent malaria on Java.
Abstract:
One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research addresses an aspect of biological informatics, the marriage of information technology and biology: the study of real-world phenomena using virtual plants derived from L-systems simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Little consideration has been given to the problem of persistent storage for these simulations, and current procedures for querying the data generated by L-systems for scientific experiments, simulations, and measurements are also inadequate. To address these problems, this paper presents a generic data-modeling process (L-DBM) that bridges L-systems and database systems. The paper shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. It further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation. A method for mapping biologists' terms to compiler-generated terms in a biological computing environment is also supplied. Given any specific set of L-system productions and declarations, the L-DBM can generate the corresponding schema, covering both simple terminology correspondences and complex recursive data attributes and relationships.
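For readers unfamiliar with L-systems, a minimal rewriter in Java (an illustrative example, not the L-DBM tool) shows the kind of recursively generated strings an L-DBM schema must store and query:

```java
import java.util.Map;

// Minimal L-system rewriter: each derivation step rewrites every symbol of
// the current string in parallel according to its production rule.
public class LSystemSketch {
    static String derive(String axiom, Map<Character, String> productions, int steps) {
        String current = axiom;
        for (int i = 0; i < steps; i++) {
            StringBuilder next = new StringBuilder();
            for (char symbol : current.toCharArray()) {
                // symbols without a production are copied unchanged
                next.append(productions.getOrDefault(symbol, String.valueOf(symbol)));
            }
            current = next.toString();
        }
        return current;
    }

    public static void main(String[] args) {
        // Lindenmayer's original algae system: A -> AB, B -> A
        Map<Character, String> p = Map.of('A', "AB", 'B', "A");
        System.out.println(derive("A", p, 4)); // prints ABAABABA
    }
}
```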
Abstract:
The movements of the ricefield rats (Rattus argentiventer) near a trap-barrier system (TBS) were assessed in lowland flood-irrigated rice crops in West Java, Indonesia, to test the hypothesis that a TBS with a 'trap-crop' modifies the movements of rats within 200 m from the trap-crop. The home range use and locations of rat burrows were assessed using radiotelemetry at two sites, one with a TBS with trap-crop (Treatment site, the crop inside the fence was planted 3 weeks earlier than the surrounding crop) and the other with a TBS without trap-crop (Control site, the crop inside the fence was planted at the same time as the surrounding crop). Each TBS was a 50 x 50 m plastic fence with eight multiple-capture rat traps set at the base. More than 700 rats were caught in the TBS with trap-crop, whereas only 10 rats were caught in the TBS without trap-crop. The home range size of females was significantly smaller at the Treatment site (0.96 ha) than the Control site (2.99 ha), but there was no difference for males. Seventy-eight per cent of rats caught in the TBS and fitted with radiocollars had their daytime burrow locations within 200 m of the TBS. We could not determine if the rats caught in the TBS were residents or transients according to demographic parameters. Our results support the hypothesis that a TBS with a trap-crop protects the surrounding rice crop out to a distance of at least 200 m.
Abstract:
High-resolution numerical model simulations have been used to study the local and mesoscale thermal circulations in an Alpine lake basin. The lake (87 km(2)) is situated in the Southern Alps, New Zealand and is located in a glacially excavated rock basin surrounded by mountain ranges that reach 3000 m in height. The mesoscale model used (RAMS) is a three-dimensional non-hydrostatic model with a level 2.5 turbulence closure scheme. The model demonstrates that thermal forcing at local (within the basin) and regional (coast-to-basin inflow) scales drive the observed boundary-layer airflow in the lake basin during clear anticyclonic summertime conditions. The results show that the lake can modify (perturb) both the local and regional wind systems. Following sunrise, local thermal circulations dominate, including a lake breeze component that becomes embedded within the background valley wind system. This results in a more divergent flow in the basin extending across the lake shoreline. However, a closed lake breeze circulation is neither observed nor modelled. Modelling results indicate that in the latter part of the day when the mesoscale (coast-to-basin) inflow occurs, the relatively cold pool of lake air in the basin can cause the intrusion to decouple from the surface. Measured data provide qualitative and quantitative support for the model results.
Abstract:
Concurrent programs are hard to test due to the inherent nondeterminism. This paper presents a method and tool support for testing concurrent Java components. Tool support is offered through ConAn (Concurrency Analyser), a tool for generating drivers for unit testing Java classes that are used in a multithreaded context. To obtain adequate controllability over the interactions between Java threads, the generated driver contains threads that are synchronized by a clock. The driver automatically executes the calls in the test sequence in the prescribed order and compares the outputs against the expected outputs specified in the test sequence. The method and tool are illustrated in detail on an asymmetric producer-consumer monitor. Their application to testing over 20 concurrent components, a number of which are sourced from industry and were found to contain faults, is presented and discussed.
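A minimal sketch of the clock idea (hypothetical code, not ConAn's generated output): each test thread blocks until the shared clock reaches the tick at which its call is scheduled, and the driver advances the clock between ticks:

```java
// Clock-driven test driver sketch: the clock gives the driver control over
// the interleaving of calls on the class under test.
public class ClockSketch {
    static class TestClock {
        private int tick = 0;
        synchronized void await(int t) throws InterruptedException {
            while (tick < t) wait();   // block until the clock reaches tick t
        }
        synchronized void advance() {
            tick++;
            notifyAll();               // wake threads scheduled for the new tick
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TestClock clock = new TestClock();
        Runnable producer = () -> {
            try {
                clock.await(1);        // scheduled at tick 1: e.g. monitor.put(x)
                System.out.println("tick 1: put");
            } catch (InterruptedException ignored) { }
        };
        Runnable consumer = () -> {
            try {
                clock.await(2);        // scheduled at tick 2: e.g. monitor.get()
                System.out.println("tick 2: get");
            } catch (InterruptedException ignored) { }
        };
        new Thread(producer).start();
        new Thread(consumer).start();
        // A real driver waits for earlier calls to finish before advancing;
        // sleeps stand in for that here.
        Thread.sleep(100); clock.advance();  // tick 1
        Thread.sleep(100); clock.advance();  // tick 2; outputs then compared
    }
}
```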
Abstract:
Interest in the design and development of graphical user interfaces (GUIs) has grown in the last few years. However, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering abstract GUI models directly from Java/Swing code.
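As a rough illustration of the component hierarchy such a model abstracts, a sketch that introspects a live Swing tree at runtime (the paper recovers models statically from source code; this is only an approximation, not the paper's tool):

```java
import java.awt.Component;
import java.awt.Container;

// Walks a live Swing component tree and prints an indented outline, the kind
// of containment structure an abstract GUI model captures.
public class GuiTreeSketch {
    static void dump(Component c, int depth) {
        System.out.println("  ".repeat(depth) + c.getClass().getSimpleName());
        if (c instanceof Container) {
            for (Component child : ((Container) c).getComponents()) {
                dump(child, depth + 1);  // recurse into nested containers
            }
        }
    }

    public static void main(String[] args) {
        javax.swing.JFrame frame = new javax.swing.JFrame("demo");
        frame.getContentPane().add(new javax.swing.JButton("OK"));
        dump(frame, 0);  // prints JFrame, JRootPane, ... down to JButton
    }
}
```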
Abstract:
The main intent of this work is to determine the Specific Absorption Rate (SAR) in human head tissues exposed to radiation from 900 and 1800 MHz sources, since those are the typical frequencies of today's mobile communication systems. To determine the SAR, the FDTD (Finite-Difference Time-Domain) technique was used: a numerical time-domain method derived from Maxwell's equations in differential form. To that end, a two-dimensional computational model of the human head was implemented, with cells of the smallest size that the limits of computational processing allowed. The results verify the very good efficiency of the FDTD method in solving this type of problem.
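A minimal sketch of the FDTD scheme named here, in Java (illustrative only: a uniform vacuum grid with a point source rather than the head model with tissue parameters, and no absorbing boundaries):

```java
// 2D FDTD (TMz) leapfrog update on a Yee grid: H fields are updated from the
// curl of Ez, then Ez from the curl of H, alternating each time step.
public class FdtdSketch {
    public static void main(String[] args) {
        final int nx = 200, ny = 200, steps = 500;
        final double c = 0.5; // Courant factor dt*c0/dx, stable for 2D
        double[][] ez = new double[nx][ny];
        double[][] hx = new double[nx][ny - 1];
        double[][] hy = new double[nx - 1][ny];
        for (int t = 0; t < steps; t++) {
            for (int i = 0; i < nx; i++)          // update Hx from d(Ez)/dy
                for (int j = 0; j < ny - 1; j++)
                    hx[i][j] -= c * (ez[i][j + 1] - ez[i][j]);
            for (int i = 0; i < nx - 1; i++)      // update Hy from d(Ez)/dx
                for (int j = 0; j < ny; j++)
                    hy[i][j] += c * (ez[i + 1][j] - ez[i][j]);
            for (int i = 1; i < nx - 1; i++)      // update Ez from curl of H
                for (int j = 1; j < ny - 1; j++)
                    ez[i][j] += c * (hy[i][j] - hy[i - 1][j]
                                   - hx[i][j] + hx[i][j - 1]);
            ez[nx / 2][ny / 2] += Math.sin(0.1 * t); // soft point source
        }
        System.out.println("field sample: " + ez[nx / 2][ny / 4]);
    }
}
```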
Abstract:
Electronic communications are increasingly the medium of choice for business between entities and for relations between citizens and the State (e-government). This diversity of transactions often involves sensitive information with potential legal value. In this context, electronic signatures are an important basis of trust, providing guarantees of integrity and authentication between the parties. Producing a digital signature yields not only the signature value itself, but also a set of additional information about it, such as the signature algorithm, the validation certificate, or the time and place of production. In a heterogeneous scenario such as the one described above, a flexible and interoperable way of describing that kind of information becomes necessary. The XML language is a suitable way to represent a signature in this context, not only because of its structured nature, but mainly because it is text-based and widely supported. The XML Signature Syntax and Processing recommendation (or simply XML Signature) was the first step in representing signatures in XML. It defines syntax and processing rules to create, represent, and validate digital signatures. XML signatures can be applied to any type of digital content identifiable by a URI, whether in the same XML document as the signature or in any other location. Furthermore, the same XML signature can cover several resources, even of different types (free text, images, XML, etc.). As electronic signatures gained relevance, it became evident that the XML Signature specification was not sufficient, namely because it provides no guarantees of long-term validity or non-repudiation. This situation was aggravated by the fact that the specification does not meet the requirements of European Union Directive 1999/93/EC, which establishes a Community-level legal framework for electronic signatures. Following this EU directive, the XML Advanced Electronic Signatures specification was developed, defining XML formats and processing rules for non-repudiable electronic signatures whose validity can be verified over long periods of time, in conformity with the directive. This specification extends the XML Signature recommendation, defining new elements that contain additional information about the signature and the signed resources (qualifying properties). Since version 1.6, the Java platform includes a high-level API for XML digital signature services, in accordance with the XML Signature recommendation. However, there is no support for advanced signatures. This project aims to develop a Java library for creating and validating XAdES signatures, thereby filling the existing gap in the platform. The developed library provides an interface with a high level of abstraction, so the programmer does not have to deal directly with the XML structure of the signature or with the details of the content of the qualifying properties. Types are defined that represent the main concepts of the signature, namely the qualifying properties and the signed resources, with the structural aspects resolved internally. In this work, the information that makes up a XAdES signature is divided into two groups: the first consists of characteristics of the signer and of the signature, such as the key and the signature's qualifying properties.
The second group consists of the signed resources and their corresponding qualifying properties. When a signer produces several signatures in a given context, the first group of characteristics will be similar among them. The invariant set of signature and signer characteristics is defined as a signature profile. The concept is extended to signature verification, in that case encompassing the information to be used in that process, such as the root certificates the verifier trusts. From another perspective, a profile constitutes a configuration of the corresponding signature service. The design and implementation of the library are also based on the concept of a service provider. A service provider is an entity that supplies information or services needed to produce and verify signatures, namely: selection of the signing key/certificate, certificate validation, interaction with time-stamp servers, and XML generation. Instead of depending directly on the information in question, a profile, and consequently the corresponding operation, is configured with service providers that are invoked when needed. An interface is defined for each type of service provider, and the corresponding implementations can be configured independently. The library includes implementations of all the service providers, some of which are used by default in signature production and verification. Since the focus of the project is the XAdES specification, the processing and structure of the basic format are delegated internally to the Apache XML Security library, which provides an implementation of the XML Signature recommendation. To validate the library's operation, namely in terms of interoperability, a set of signatures produced by European Union Member States, as well as by another implementation of the XAdES specification, is verified, among other tests.
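The abstract notes that the Java platform has shipped a baseline XML Signature API since 1.6. A minimal sketch of that standard API (javax.xml.crypto.dsig) producing an enveloped signature is shown below; the input file name is hypothetical, and XAdES qualifying properties are precisely what this API does not cover:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.*;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Enveloped XML signature with the JDK's built-in API (Java 1.6+).
public class EnvelopedSignatureSketch {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new java.io.File("document.xml")); // hypothetical input

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        Reference ref = fac.newReference("",               // "" = whole document
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(fac.newTransform(
                        Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(SignatureMethod.RSA_SHA1, null),
                Collections.singletonList(ref));

        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyInfo ki = kif.newKeyInfo(Collections.singletonList(
                kif.newKeyValue(kp.getPublic())));

        // Sign: the ds:Signature element is appended under the document root.
        DOMSignContext ctx = new DOMSignContext(kp.getPrivate(), doc.getDocumentElement());
        fac.newXMLSignature(si, ki).sign(ctx);
    }
}
```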
Abstract:
We carry out systematic Monte Carlo simulations of Go lattice proteins to investigate and compare the folding processes of two model proteins whose native structures differ from each other due to the presence of a trefoil knot located near the terminus of one of the protein chains. We show that the folding time of the knotted fold is larger than that of the unknotted protein, and that this difference in folding time is particularly striking in the temperature region below the optimal folding temperature. Both proteins display similar folding transition temperatures, which is indicative of similar thermal stabilities. By using the folding probability reaction coordinate as an estimator of folding progression, we find that the formation of the knot is mainly a late folding event in our shallow knot system.
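For reference, the Metropolis acceptance rule that drives such lattice Monte Carlo simulations, as a minimal Java sketch (hypothetical energies; the Go-model move set, chain connectivity and self-avoidance checks of the actual simulations are omitted):

```java
import java.util.Random;

// Metropolis criterion: always accept downhill moves; accept uphill moves
// with probability exp(-dE/T) (Boltzmann constant absorbed into T's units).
public class MetropolisSketch {
    static final Random RNG = new Random();

    static boolean accept(double deltaE, double temperature) {
        return deltaE <= 0 || RNG.nextDouble() < Math.exp(-deltaE / temperature);
    }

    public static void main(String[] args) {
        System.out.println(accept(-1.0, 0.5)); // downhill: always true
        System.out.println(accept(1.0, 0.5));  // uphill: true with prob e^-2
    }
}
```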
Abstract:
The rapid growth in genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging. Such imaging is applied most frequently to mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to serve as their own controls, reducing interanimal variability; this permits longitudinal studies on the same animal and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amounts of radiotracer needed, limited scanner sensitivity, image resolution, and image quantification issues could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, together with the detection of these quanta and particles in different materials, makes the Monte Carlo method an important simulation tool in both nuclear medicine research and clinical practice. In order to optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation since they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement for a wide range of problems that could not be addressed by experimental or analytical approaches.
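As a small taste of what such simulations sample, a sketch drawing the free path of a 511 keV annihilation photon from an exponential distribution via inverse-CDF sampling (the attenuation coefficient below is a rough textbook value for water, not data from this text):

```java
import java.util.Random;

// Samples one of the random processes a PET Monte Carlo code models: how far
// a photon travels in a uniform medium before interacting.
public class PhotonPathSketch {
    public static void main(String[] args) {
        Random rng = new Random();
        double muPerCm = 0.096; // approx. linear attenuation of water at 511 keV
        double path = -Math.log(1.0 - rng.nextDouble()) / muPerCm;
        System.out.printf("sampled free path: %.2f cm%n", path);
    }
}
```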
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology, and Bio-informatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of whether the cause is a hardware or software fault, lack of available resources, etc.), all of the work already performed is simply lost; when the application is later re-initiated, it has to restart all its work from scratch, wasting resources and time, while also being prone to another failure that may delay its completion with no deadline guarantees. Our proposed solution to these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible, able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes research virtual machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
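For contrast with the transparent, JVM-level approach described here, a minimal sketch of what manual, application-level checkpointing looks like with plain Java serialization (all names hypothetical; the extended Jikes RVM makes exactly this kind of programmer-written code unnecessary):

```java
import java.io.*;

// Application-level checkpointing: persist and restore a reachable object
// graph so a long-running computation can resume after a failure.
public class CheckpointSketch {
    static void save(Serializable state, File f) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(state);      // persist the current state
        }
    }

    @SuppressWarnings("unchecked")
    static <T> T load(File f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (T) in.readObject();  // resume from the last saved state
        }
    }
}
```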
Abstract:
Over time, the XML markup language has acquired considerable importance in application development, standards definition, and the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a large range of applications, which makes it essential to choose the most appropriate mechanism for parsing XML documents quickly and efficiently. When using a programming language for XML processing, such as Java, it becomes necessary to use effective mechanisms, e.g. APIs, that allow reading and processing of large documents in an appropriate manner. This paper presents a performance study of the main existing Java APIs that deal with XML documents, in order to identify the most suitable one for processing large XML files.
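As a hedged illustration of the kind of API such a study compares (the paper's own benchmark code is not shown here), a minimal pass over a large file with StAX (javax.xml.stream), which streams events instead of building an in-memory tree; the element name and file name are hypothetical:

```java
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Streaming (pull) parsing: memory use stays constant regardless of file size,
// unlike DOM, which materializes the whole document tree.
public class StaxCountSketch {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream("large.xml"));
        long records = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "record".equals(reader.getLocalName())) {
                records++;      // count elements while streaming
            }
        }
        reader.close();
        System.out.println("records = " + records);
    }
}
```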