929 results for space time code


Relevance:

30.00%

Publisher:

Abstract:

Evidence exists that both right and left hemisphere attentional mechanisms are mobilized when attention is directed to the right visual hemifield and only right hemisphere attentional mechanisms are mobilized when attention is directed to the left visual hemifield. This arrangement might lead to a rightward bias of automatic attention. The hypothesis was investigated by testing male volunteers, using a "location discrimination" reaction time task (Experiments 1 and 3) and a "location and shape discrimination" reaction time task (Experiments 2 and 4). Unilateral (Experiments 1 and 2) and unilateral or bilateral (Experiments 3 and 4) peripheral visual prime stimuli were used to control attention. Reaction time to a small visual target stimulus in the same location or in the horizontally opposite location was evaluated. Stimulus onset asynchronies (SOAs) were 34, 50, 67, 83 and 100 ms. An important prime stimulus attentional effect was observed as early as 50 ms in the four experiments. In Experiments 2, 3 and 4, this effect was larger when the prime stimulus occurred in the right hemifield than when it occurred in the left hemifield for SOA 100 ms. In Experiment 4, when the prime stimulus occurred simultaneously in both hemifields, reaction time was faster for the right hemifield and for SOA 100 ms. These results indicate that automatic attention tends to favor the right side of space, particularly when identification of the target stimulus shape is required. (c) 2007 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is simultaneously displayed.
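The neighbor-based repositioning described above can be sketched as follows. This is an illustrative reading of the idea, not the published incBoard/incSpace algorithm; all names and the 50/50 blending factor are hypothetical:

```python
def reposition(grid_pos, sim, neighbors):
    """Move each element toward its grid neighbors, weighted by pairwise
    similarity, yielding positions free of the grid constraint.

    grid_pos  : dict id -> (x, y) current grid coordinates
    sim       : dict (id_a, id_b) -> similarity in [0, 1]
    neighbors : dict id -> list of grid-neighbor ids
    """
    new_pos = {}
    for e, (x, y) in grid_pos.items():
        wx = wy = wsum = 0.0
        for n in neighbors[e]:
            w = sim.get((e, n), sim.get((n, e), 0.0))
            nx, ny = grid_pos[n]
            wx += w * nx
            wy += w * ny
            wsum += w
        if wsum > 0.0:
            # Blend the old position with the similarity-weighted centroid:
            # similar neighbors pull strongly, dissimilar ones barely at all.
            new_pos[e] = (0.5 * (x + wx / wsum), 0.5 * (y + wy / wsum))
        else:
            new_pos[e] = (x, y)
    return new_pos
```

Because each element only looks at its own grid neighbors, one update costs time proportional to the currently viewed subset, matching the incremental spirit of the technique.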

Relevance:

30.00%

Publisher:

Abstract:

Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.
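The H-tree placement idea can be sketched with a small recursive routine. This is a generic H-tree construction under assumed conventions (Ahnentafel numbering where node k has parents 2k and 2k+1, and branch length halving at each level; real H-trees often shrink by √2 instead), not the authors' implementation:

```python
def h_tree_positions(depth, x=0.0, y=0.0, length=1.0,
                     horizontal=True, pos=None, node=1):
    """Assign 2D positions to a complete binary ancestor tree using an
    H-tree: branches alternate between horizontal and vertical placement,
    shrinking at each generation so the whole tree stays compact.
    """
    if pos is None:
        pos = {}
    pos[node] = (x, y)
    if depth == 0:
        return pos
    if horizontal:
        # Place the two parents to the left and right of the current node.
        h_tree_positions(depth - 1, x - length, y, length / 2, False, pos, 2 * node)
        h_tree_positions(depth - 1, x + length, y, length / 2, False, pos, 2 * node + 1)
    else:
        # Alternate direction: parents go below and above.
        h_tree_positions(depth - 1, x, y - length, length / 2, True, pos, 2 * node)
        h_tree_positions(depth - 1, x, y + length, length / 2, True, pos, 2 * node + 1)
    return pos
```

The alternation of directions is what produces the symmetrical fractal structure mentioned above and lets the number of displayable generations grow while space per generation shrinks geometrically.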

Relevance:

30.00%

Publisher:

Abstract:

Radial transport in the tokamap, which has been proposed as a simple model for the motion in a stochastic plasma, is investigated. A theory for previous numerical findings is presented. The new results are stimulated by the fact that the radial diffusion coefficient is space-dependent. The space-dependence of the transport coefficient has several interesting effects which have not been elucidated so far. Among the new findings are the analytical predictions for the scaling of the mean radial displacement with time and the relation between the Fokker-Planck diffusion coefficient and the diffusion coefficient obtained from the mean square displacement. The applicability to other systems is also discussed. (c) 2009 WILEY-VCH GmbH & Co. KGaA, Weinheim
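As a hedged illustration of why a space-dependent coefficient matters, consider a generic one-dimensional Fokker-Planck equation with a coefficient depending on the radial coordinate (the notation here is generic, not the paper's):

```latex
\frac{\partial P(x,t)}{\partial t}
  \;=\; \frac{\partial}{\partial x}\!\left[\, D(x)\,\frac{\partial P(x,t)}{\partial x} \right],
\qquad
D_{\mathrm{MSD}}(t) \;=\; \frac{1}{2}\,\frac{d}{dt}\,\big\langle (\Delta x)^2 \big\rangle .
```

For constant $D$ the two coefficients coincide, but for a power-law coefficient $D(x)\propto x^{\theta}$ one finds anomalous scaling $\langle (\Delta x)^2\rangle \propto t^{2/(2-\theta)}$, so the Fokker-Planck coefficient and the coefficient inferred from the mean square displacement need not agree.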

Relevance:

30.00%

Publisher:

Abstract:

The concept of Fock space representation is developed to deal with stochastic spin lattices written in terms of fermion operators. A density operator is introduced in order to follow in parallel the developments of the case of bosons in the literature. Some general conceptual quantities for spin lattices are then derived, including the notion of generating function and path integral via Grassmann variables. The formalism is used to derive the Liouvillian of the d-dimensional Linear Glauber dynamics in the Fock-space representation. Then the time evolution equations for the magnetization and the two-point correlation function are derived in terms of the number operator. (C) 2008 Elsevier B.V. All rights reserved.
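As a generic sketch of the Fock-space representation referred to above (symbols here follow the common Doi-type construction, not necessarily the paper's exact notation), configuration probabilities are bundled into a state vector whose evolution is generated by the Liouvillian:

```latex
|P(t)\rangle \;=\; \sum_{\{n\}} P(\{n\},t)\,|\{n\}\rangle ,
\qquad
\frac{d}{dt}\,|P(t)\rangle \;=\; \hat{L}\,|P(t)\rangle ,
\qquad
\langle A\rangle(t) \;=\; \langle \mathcal{P}|\,\hat{A}\,|P(t)\rangle ,
```

where $|\{n\}\rangle$ are occupation-number basis states (fermionic for spin lattices) and $\langle\mathcal{P}|$ is the projection state that enforces probability conservation. Time evolution equations for observables such as the magnetization then follow from commuting $\hat{A}$ with $\hat{L}$.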

Relevance:

30.00%

Publisher:

Abstract:

Adequate initial configurations for molecular dynamics simulations consist of arrangements of molecules distributed in space in such a way as to approximately represent the system's overall structure. So that the simulations are not disrupted by large van der Waals repulsive interactions, atoms from different molecules must keep safe pairwise distances. Obtaining such a molecular arrangement can be considered a packing problem: each type of molecule must satisfy spatial constraints related to the geometry of the system, and the distance between atoms of different molecules must be greater than some specified tolerance. We have developed a code able to pack millions of atoms, grouped in arbitrarily complex molecules, inside a variety of three-dimensional regions. The regions may be intersections of spheres, ellipses, cylinders, planes, or boxes. The user must provide only the structure of one molecule of each type and the geometrical constraints that each type of molecule must satisfy. Building complex mixtures, interfaces, and solvating biomolecules in water, other solvents, or mixtures of solvents is straightforward. In addition, different atoms belonging to the same molecule may also be restricted to different spatial regions, in such a way that more ordered molecular arrangements can be built, such as micelles, lipid double-layers, etc. The packing time for state-of-the-art molecular dynamics systems varies from a few seconds to a few minutes on a personal computer. The input files are simple and currently compatible with PDB, Tinker, Molden, or Moldy coordinate files. The package is distributed as free software and can be downloaded from http://www.ime.unicamp.br/~martinez/packmol/. (C) 2009 Wiley Periodicals, Inc. J Comput Chem 30: 2157-2164, 2009
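The central packing constraint (a minimum pairwise distance between atoms of different molecules) can be expressed as a brute-force feasibility check. This is a sketch of the constraint only, not Packmol's optimization algorithm, and the function name is hypothetical:

```python
import itertools
import math


def packing_ok(molecules, tol=2.0):
    """Check the core packing constraint: every pair of atoms belonging to
    *different* molecules must be at least `tol` (e.g. angstroms) apart.

    molecules : list of molecules, each a list of (x, y, z) atom coordinates
    """
    for mol_a, mol_b in itertools.combinations(molecules, 2):
        for atom_a in mol_a:
            for atom_b in mol_b:
                if math.dist(atom_a, atom_b) < tol:
                    return False  # clashing pair found
    return True
```

A packing code must search for coordinates that make this check pass for millions of atoms, which is why an optimization formulation (rather than brute force) is needed in practice.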

Relevance:

30.00%

Publisher:

Abstract:

This work aims at combining the postulates of Chaos theory with the classification and predictive capability of Artificial Neural Networks in the field of financial time series prediction. Chaos theory provides valuable qualitative and quantitative tools for deciding on the predictability of a chaotic system. Quantitative measurements based on Chaos theory are used to decide a priori whether a time series, or a portion of a time series, is predictable, while qualitative tools based on Chaos theory provide further observations and analysis of predictability in cases where the measurements give negative answers. Phase space reconstruction is achieved by time delay embedding, resulting in multiple embedded vectors. The suggested cognitive approach is inspired by the capability of some chartists to predict the direction of an index by looking at the price time series. Thus, in this work, the calculation of the embedding dimension and the separation in Takens' embedding theorem for phase space reconstruction is not limited to False Nearest Neighbor, Differential Entropy, or any other specific method; rather, this work considers all embedding dimensions and separations, regarded as the different ways in which different chartists look at a time series, based on their expectations. Prior to the prediction, the embedded vectors of the phase space are classified with Fuzzy-ART; then, for each class, a backpropagation Neural Network is trained to predict the last element of each vector, with all previous elements of a vector used as features.
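The time-delay embedding step can be sketched as follows; `dim` and `tau` play the role of the embedding dimension and separation discussed above. This is an illustrative sketch, not the authors' code:

```python
import numpy as np


def delay_embed(series, dim, tau):
    """Time-delay (Takens) embedding: turn a scalar series into
    `dim`-dimensional vectors whose components are `tau` samples apart.
    The last component of each vector is the value the network is
    trained to predict from the earlier components.
    """
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau  # number of complete embedded vectors
    return np.array([series[i : i + dim * tau : tau] for i in range(n)])
```

Running this over a grid of (`dim`, `tau`) pairs produces the multiple "views" of the series that the approach then classifies with Fuzzy-ART before training one predictor per class.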

Relevance:

30.00%

Publisher:

Abstract:

This study covers a period when society changed from a pre-industrial agricultural society to a post-industrial service-producing society. Parallel with this social transformation, major population changes took place. In this study, we analyse how local population changes are affected by neighbouring populations. To do so we use the last 200 years of local population change that redistributed population in Sweden. We use the literature to identify several different processes and spatial dependencies in the redistribution between a parish and its surrounding parishes. The analysis is based on a unique, unchanged historical parish division, and we use an index of local spatial correlation to describe different kinds of spatial dependencies that have influenced the redistribution of the population. To control for inherent time dependencies, we introduce a non-separable spatial temporal correlation model into the analysis of population redistribution. In this way, several different spatial dependencies can be observed simultaneously over time. The main conclusions are that while local population changes were highly dependent on neighbouring populations in the 19th century, this spatial dependence had already become insignificant when two parishes were separated by 5 kilometres in the late 20th century. Another conclusion is that the time dependency of population change is higher when population redistribution is weak, as it currently is and as it was during the 19th century until the start of the industrial revolution.
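One common index of local spatial correlation is local Moran's I; the sketch below is illustrative only, and the study's actual index and space-time correlation model may differ:

```python
import numpy as np


def local_moran(values, weights):
    """Local Moran's I for each areal unit (e.g. parish).

    values  : 1D array of the variable of interest (e.g. population change)
    weights : (n, n) spatial weight matrix, e.g. 1 for neighbours, 0 otherwise

    I_i = (z_i / m2) * sum_j w_ij * z_j, with z the centred values and
    m2 the mean squared deviation. Positive values flag units that resemble
    their neighbours; negative values flag local outliers.
    """
    z = values - values.mean()
    m2 = (z ** 2).mean()
    return (z / m2) * (weights @ z)
```

A non-separable space-time model generalizes this by letting the correlation between two parishes depend jointly on their distance and the time lag, rather than on each factor independently.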

Relevance:

30.00%

Publisher:

Abstract:

Large and complex code bases with poor code comprehension are an increasingly common problem among companies today. Poor code comprehension results in more time spent maintaining and modifying the code, which for a company leads to increased costs. Clean Code is considered by some to be the solution to this problem. Clean Code is a collection of guidelines and principles for writing code that is easy to understand and maintain. A knowledge gap was identified regarding empirical data examining Clean Code's impact on code comprehension. The study's research question was: How is comprehension affected when modifying code that has been refactored according to the Clean Code principles for naming and for writing functions? To investigate how Clean Code affects code comprehension, a field experiment was carried out together with the company CGM Lab Scandinavia in Borlänge, where data on time spent and perceived comprehension among test participants were collected and analysed. The study's results show no clear improvement or deterioration of code comprehension, as only the perceived code comprehension seems to be affected. All test participants preferred Clean Code over Dirty Code even though the time spent was unaffected. This leads to the conclusion that the effects of Clean Code may not be immediate, as developers have not yet had time to adapt to Clean Code and therefore cannot exploit it fully. The study gives an indication of Clean Code's potential to improve code comprehension.
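A minimal illustration of the two principles the experiment examined, naming and small functions. Both versions below are hypothetical and only stand in for the kind of refactoring compared in the study:

```python
# "Dirty" version: cryptic names, one function doing several things at once.
def p(l, t):
    r = []
    for x in l:
        if x[1] > t:
            r.append(x[0].upper())
    return r


# Refactored per the naming and small-function guidelines: each function
# does one thing and its name says what that thing is.
def is_above_threshold(measurement, threshold):
    _, value = measurement
    return value > threshold


def format_label(measurement):
    label, _ = measurement
    return label.upper()


def labels_above_threshold(measurements, threshold):
    return [format_label(m) for m in measurements
            if is_above_threshold(m, threshold)]
```

Both versions compute the same result; the study's question is whether the second form measurably speeds up comprehension when the code must be modified.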

Relevance:

30.00%

Publisher:

Abstract:

This work discusses the importance of image compression for industry; it is known that image processing and storage are a constant challenge at Petrobras, where the goals are to optimize storage time and to store a maximum number of images and data. We present an interactive system for processing and storing images in the wavelet domain, together with an interface for digital image processing. The proposal is based on the Peano function and the 1D wavelet transform. The storage system aims to optimize computational space, both for storage and for transmission of images. The Peano function is applied to linearize the images, and the 1D wavelet transform to decompose them. These steps extract the information relevant to storing an image at a lower computational cost and with a very small margin of error: comparison of the original and processed images shows little loss of quality when the presented processing system is applied. The results obtained from the information extracted from the images are displayed in a graphical interface. Through this graphical user interface, the user views and analyses the results of the programs directly on the computer screen, without having to deal with the source code. The graphical user interface and the programs for image processing via the Peano function and the 1D wavelet transform were developed in Java, allowing a direct exchange of information between them and the user.
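A one-level 1D Haar transform is a simple concrete instance of the kind of 1D wavelet decomposition described above. The abstract does not specify which wavelet the system uses, and its implementation is in Java; this Python sketch is illustrative only:

```python
def haar_1d(signal):
    """One level of the 1D Haar wavelet transform: pairwise averages
    (approximation) and pairwise half-differences (detail). After
    Peano linearization, a transform like this decomposes the image's
    1D representation; small details can then be stored coarsely.
    """
    assert len(signal) % 2 == 0, "signal length must be even"
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail


def haar_1d_inverse(approx, detail):
    """Exact reconstruction from one Haar level."""
    signal = []
    for a, d in zip(approx, detail):
        signal += [a + d, a - d]
    return signal
```

Compression comes from quantizing or discarding small `detail` coefficients, which is what introduces the "very small margin of error" between the original and processed images.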

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Ethanol is the most widely abused psychoactive drug in the world, which makes it one of the substances most frequently required in toxicological examinations today. The development of an analytical method, or the adaptation or implementation of a known one, involves a validation process that estimates its efficiency in the laboratory routine and the credibility of the method. Stability is defined as the ability of a sample of material to keep the initial value of a quantitative measure, within specific limits, for a defined period when stored under defined conditions. This study aimed to evaluate a gas chromatography method and to study the stability of ethanol in blood samples, considering the variables of storage time and temperature and the presence of a preservative, and thereby to check whether the conservation and storage conditions used in this study maintain the quality of the sample and preserve the amount of analyte originally present. Blood samples were collected from 10 volunteers to evaluate the method and to study the stability of ethanol. For the evaluation of the method, known concentrations of ethanol were added to part of the samples. In the stability study, the remainder of the blood pool was placed in two containers, one containing the preservative sodium fluoride 1% plus the anticoagulant heparin and the other only heparin; ethanol was added at a concentration of 0.6 g/L, and each container was divided into two bottles, one stored at 4ºC (refrigerator) and the other at -20ºC (freezer). The tests were performed on the same day (time zero) and after 1, 3, 7, 14, 30 and 60 days of storage, and the assessment compared the results during storage with those at time zero. The technique used was headspace sampling associated with gas chromatography with FID and a capillary column with a polyethylene stationary phase.
The best chromatographic conditions were: temperatures of 50ºC (column), 150ºC (injector) and 250ºC (detector), with retention times of 9.107 ± 0.026 minutes for ethanol and 8.170 ± 0.081 minutes for tert-butanol (internal standard); ethanol was properly separated from acetaldehyde, acetone, methanol and 2-propanol, which are potential interferents in the determination of ethanol. The technique showed linearity in the concentration range of 0.01 to 3.2 g/L (y = 0.8051x + 0.6196; r² = 0.999). The calibration curve gave the line y = 0.7542x + 0.6545, with a linear correlation coefficient of 0.996. The average recovery was 100.2%; the intra- and inter-assay coefficients of variation were at most 7.3%; and the limits of detection and quantification were 0.01 g/L, with coefficients of variation within the allowed range. The analytical method evaluated in this study proved to be fast, efficient and practical, satisfactorily meeting the objective of this work. The stability study found less than 20% difference between the responses obtained under the stipulated storage conditions and periods and the response obtained at time zero, and at the 5% significance level no statistical difference in the concentration of ethanol was observed between analyses. The results reinforce the reliability of gas chromatography on blood samples in the search for ethanol, whether in toxicological, forensic, social or clinical contexts.
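Given a calibration line of the form y = slope·x + intercept, as quoted above, an unknown concentration is recovered by inverting the line. A minimal sketch, using the reported coefficients (0.7542 and 0.6545) as defaults; the function name is hypothetical:

```python
def concentration_from_response(response, slope=0.7542, intercept=0.6545):
    """Invert the calibration line y = slope * x + intercept to recover
    the ethanol concentration x (g/L) from the measured detector
    response y (e.g. the analyte / internal-standard peak-area ratio).
    """
    return (response - intercept) / slope
```

In routine use the response of an unknown sample is measured under the same chromatographic conditions as the calibrators, and this inversion gives the reported blood alcohol concentration.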

Relevance:

30.00%

Publisher:

Abstract:

Alcohol is one of the few psychotropic drugs whose consumption is legally admitted, and sometimes even encouraged, by society. Studies show alcohol to be the most consumed drug among young people and society in general, probably because of its availability and easy access. Its abuse causes public health problems closely related to violence, socioeconomic problems and the high number of automobile accidents. Traffic is one of the main sectors affected by the effects of alcohol, with a high incidence observed in the studies. About half of automobile accidents occur after the consumption of alcoholic beverages, and the vast majority of cases involve high concentrations of alcohol in the bloodstream. The relationship between drunkenness and traffic accidents is in fact evident everywhere in the world, including Brazil, where studies have shown a strong relationship between alcohol consumption and traffic accidents. This study determined the alcohol levels in fatal victims of traffic accidents in the state of Rio Grande do Norte and established the profile of this population compared with those found in Brazil and other countries. Ethanol was added to blood samples to standardize the chromatographic conditions and analysis procedures, which were then employed in the determination of alcohol in blood samples from 277 victims of traffic accidents, collected at the Institute of Technical and Scientific Police of Rio Grande do Norte (ITEP) in 2007. The blood alcohol level determined in these samples was correlated with the sex, age and marital status of the victim and with the location, day of the week and month when the accident occurred; a statistical analysis was carried out, outlining a profile of the victims of traffic accidents in the state of Rio Grande do Norte. The standardization parameters studied ensured the quality of the analytical method and, consequently, reliable laboratory results.
The best temperatures were 150 ºC for the injector, 250 ºC for the detector and 50 ºC for the column, with a gas flow in the column of 2 mL/minute and an analysis time of 12 minutes. The method was linear in the range of 0.01 to 3.2 g/L (r² = 0.9989), with an average recovery of 100.2% and a precision with a coefficient of variation of less than 15%. In the analyses carried out on fatal road traffic victims, ethanol was detected in the blood of 66.43% of the victims, and of these, 96% showed concentrations ≥ 0.2 g/L; 87.73% of the victims were male and 12.27% female. The younger age group (15-35 years) was the most involved (52.35%), and most victims were single (55.60%). The accidents occurred with greater prevalence on Mondays (27%), followed by Sundays (24.19%) and Saturdays (15.52%), and the prevalence of injuries varied between the months of the year, February (14.4%) and April (10.47%) being the months with the highest numbers of accidents; however, this oscillation showed no statistically significant difference. No significant difference was observed either between the concentration ranges found in men and women. The standardized method proved to be efficient, satisfactorily meeting the goals of this work, and the high levels of alcohol found in fatal road traffic victims are consistent with several studies in the literature; the profile of the victims, mostly young adults, male and single, is also supported by the literature.

Relevance:

30.00%

Publisher:

Abstract:

In the absence of selective availability, which was turned off on May 1, 2000, the ionosphere can be the largest source of error in GPS positioning and navigation. Its effects on the GPS observables cause code delays and phase advances. The magnitude of this error is affected by the local time of day, season, solar cycle, geographical location of the receiver and the Earth's magnetic field. As is well known, the ionosphere is the main drawback for high accuracy positioning with single frequency receivers, whether for point positioning or for relative positioning over medium and long baselines. The ionospheric effects were investigated in the determination of point positioning and relative positioning using single frequency data. A model represented by a Fourier series was implemented and its parameters were estimated from data collected at the active stations of the RBMC (Brazilian Network for Continuous Monitoring of GPS satellites). The data input were the pseudorange observables filtered by the carrier phase. Quality control was implemented in order to analyse the adjustment and to validate the significance of the estimated parameters. Experiments were carried out in the equatorial region, using data collected from dual frequency receivers. In order to validate the model, the estimated values were compared with ground truth. For point positioning and for relative positioning of baselines of approximately 100 km, the discrepancies indicated error reductions better than 80% and 50% respectively, compared to processing without the ionospheric model. These results indicate that more research should be done in order to support L1 GPS users in the equatorial region.
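A Fourier-series model of this kind can be fitted by linear least squares. The sketch below is generic (harmonics of an assumed 24-hour diurnal period, fitted to a scalar delay series), not the paper's exact parameterization:

```python
import numpy as np


def fit_fourier(t, y, n_harmonics=2, period=24.0):
    """Least-squares fit of y(t) ~ a0 + sum_k [a_k cos(w_k t) + b_k sin(w_k t)],
    w_k = 2*pi*k/period. Returns the design matrix and the coefficients.

    t : local time in hours, y : observed ionospheric delay (units arbitrary).
    """
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t)]  # constant term a0
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, coeffs
```

Because the model is linear in its coefficients, the same machinery supports the quality control mentioned above: residuals and the covariance of the estimates can be used to test the significance of each Fourier term.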

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)