946 results for Fast virtual stenting method
Abstract:
A reinforcement learning (RL) method was used to train a virtual character to move participants to a specified location. The virtual environment depicted an alleyway displayed through a wide field-of-view head-tracked stereo head-mounted display. Based on proxemics theory, we predicted that when the character approached within a personal or intimate distance of the participants, they would be inclined to move backwards out of the way. We carried out a between-groups experiment with 30 female participants, with 10 assigned arbitrarily to each of three groups: in the Intimate condition the character could approach within 0.38 m, and in the Social condition no nearer than 1.2 m. In the Random condition the actions of the virtual character were chosen randomly from the same set as in the RL method, and the virtual character could approach within 0.38 m. The experiment continued in each case until the participant either reached the target or 7 minutes had elapsed. The distributions of the times taken to reach the target showed significant differences between the three groups: 9 out of 10 participants in the Intimate condition reached the target, significantly faster than the 6 out of 10 who reached it in the Social condition, and only 1 out of 10 in the Random condition reached the target. The experiment is an example of applied presence theory: we rely on the many findings that people tend to respond realistically in immersive virtual environments, and use this to lead people to achieve a task of which they had been unaware. This method opens the door to many such applications in which the virtual environment adapts to the responses of the human participants with the aim of achieving particular goals.
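The control loop described above can be illustrated with a toy reinforcement-learning sketch. The following is a minimal tabular Q-learning example in which a character "herds" a simulated participant along a discretized 1D alleyway toward a target cell; the state encoding, the simulated proxemics response, and the reward shaping are illustrative assumptions, not the authors' implementation.

```python
import random

# Minimal tabular Q-learning sketch: a character learns to drive a simulated
# participant along a 1D alleyway toward a target cell. Environment, state
# encoding and reward are illustrative assumptions, not the paper's setup.

N_CELLS = 12            # discretized alleyway positions
TARGET = 0              # cell the participant should reach
ACTIONS = [-1, 0, +1]   # character moves: toward target / stay / away

def step(char, part, action):
    char = max(0, min(N_CELLS - 1, char + action))
    # simulated proxemics response: if the character enters the participant's
    # personal space (same or adjacent cell), the participant steps away
    if abs(char - part) <= 1 and part > 0 and char >= part:
        part -= 1
    reward = 1.0 if part == TARGET else -0.01   # small time penalty
    return char, part, reward, part == TARGET

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    char, part = N_CELLS - 1, N_CELLS // 2
    for t in range(100):
        s = (char, part)
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: q(s, x)))
        char, part, r, done = step(char, part, a)
        s2 = (char, part)
        best_next = max(q(s2, x) for x in ACTIONS)
        Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
        if done:
            break
```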
Abstract:
Previous studies have examined the experience of owning a virtual surrogate body or body part through specific combinations of cross-modal multisensory stimulation. Both visuomotor (VM) and visuotactile (VT) synchronous stimulation have been shown to be important for inducing a body ownership illusion, whether tested separately or in combination. In this study we compared the relative importance of these two cross-modal correlations when both are provided in the same immersive virtual reality setup and the same experiment. We systematically manipulated VT and VM contingencies in order to assess their relative roles and mutual interaction. Moreover, we present a new method for measuring the induced body ownership illusion over time, by recording reports of breaks in the illusion of ownership ("breaks") throughout the experimental phase. The balance of the evidence, from both the questionnaires and the analysis of breaks, suggests that while VM synchronous stimulation contributes the most to the attainment of the illusion, a disruption of either contingency (through asynchronous stimulation) contributes equally to the probability of a break in the illusion.
Abstract:
The aim of the study is to anticipate how the electronification of business processes will develop by using the scenario method, one of the most widely used methods of futures research. The focus is in particular on future e-business solutions in the forest industry. The study examines the characteristics of the scenario method, the principles of scenario planning, and the suitability of the method for examining changes in technology and in an industry. The theoretical part of the study examines the effect of technological change on the development of industries. It was found that technological change has a strong influence on industry change, and that each industry follows a certain development trajectory. Companies must be aware of the speed and direction of technological change and follow the rules governing the development of their industry. In the forest industry, the radical nature of the changes and the rapid development of ICT pose challenges for the electronification of business processes. In the empirical part, three different scenarios of the future of e-business in the forest industry were created. The scenarios are mainly based on the current views of experts on the subject, collected in a scenario workshop. Qualitative and quantitative elements were combined in constructing the scenarios. The three scenarios show that the future effects of e-business are seen as mainly positive, and that companies must develop actively and flexibly in order to exploit electronic solutions effectively in their business.
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) of the solution, where generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles which carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in the regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
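As a rough illustration of the particle idea described above, the sketch below advects meshless particles along the characteristics of a 1D linear convection problem and projects them back onto a fixed Eulerian grid at every step. The deposition step here is a simple interpolation with clipping, standing in for the paper's fast monotone projection, and the velocity field and discretization parameters are arbitrary assumptions.

```python
import numpy as np

# 1D linear convection u_t + a(x) u_x = 0 solved by advecting particles along
# characteristics and projecting them back onto a fixed grid at each step.

nx, L, dt, nsteps = 200, 1.0, 0.002, 200
x_grid = np.linspace(0.0, L, nx)
a = lambda x: 1.0 + 0.0 * x          # constant velocity field (assumption)

# initial condition with a steep front
u_grid = np.where((x_grid > 0.1) & (x_grid < 0.3), 1.0, 0.0)

for _ in range(nsteps):
    # seed particles at grid nodes carrying the current solution values
    xp, up = x_grid.copy(), u_grid.copy()
    # advect particles along characteristics dx/dt = a(x)
    xp = xp + dt * a(xp)
    # project the particle set back onto the grid by linear interpolation
    order = np.argsort(xp)
    u_grid = np.interp(x_grid, xp[order], up[order])
    # crude global-bound clipping, standing in for a monotone limiter
    u_grid = np.clip(u_grid, up.min(), up.max())
```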
Abstract:
Report on the analysis of the new website of the UOC Virtual Library, carried out using the user-testing method in order to evaluate the usability of the new tool. The analysis is part of the user-centred design process that was followed in its design.
Abstract:
In two previous papers [J. Differential Equations, 228 (2006), pp. 530-579; Discrete Contin. Dyn. Syst. Ser. B, 6 (2006), pp. 1261-1300] we developed fast algorithms for the computation of invariant tori in quasi-periodic systems and theorems that assess their accuracy. In this paper, we present the results of implementing these algorithms and study their performance in actual implementations. More importantly, we note that, due to the speed of the algorithms and the theoretical developments about their reliability, we can compute with confidence invariant objects close to the breakdown of their hyperbolicity properties. This allows us to identify a mechanism of loss of hyperbolicity and measure some of its quantitative regularities. We find that some systems lose hyperbolicity because the stable and unstable bundles approach each other but the Lyapunov multipliers remain away from 1. We find empirically that, close to the breakdown, the distances between the invariant bundles and the Lyapunov multipliers, which are natural measures of hyperbolicity, depend on the parameters as power laws with universal exponents. We also observe that, even if the rigorous justifications in [J. Differential Equations, 228 (2006), pp. 530-579] are developed only for hyperbolic tori, the algorithms also work for elliptic tori in Hamiltonian systems. We can continue these tori and also compute some bifurcations at resonance which may lead to the existence of hyperbolic tori with nonorientable bundles. We compute manifolds tangent to nonorientable bundles.
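The reported power-law behaviour near breakdown can be estimated by a log-log linear fit of, for example, the bundle distance against the distance to the critical parameter value. A minimal sketch follows, assuming hypothetical arrays eps and dist and a known critical value eps_c; this is generic curve fitting, not the authors' continuation code.

```python
import numpy as np

# Estimate a power-law exponent d(eps) ~ C * (eps_c - eps)**beta
# by a log-log least-squares fit. 'eps', 'dist' and 'eps_c' are
# hypothetical stand-ins for measured bundle distances near breakdown.

def power_law_exponent(eps, dist, eps_c):
    x = np.log(eps_c - eps)          # distance to the breakdown parameter
    y = np.log(dist)                 # measured distance between bundles
    beta, log_c = np.polyfit(x, y, 1)
    return beta, np.exp(log_c)

# synthetic check: an exact exponent of 1.0 should be recovered
eps = np.linspace(0.0, 0.9, 50)
beta, c = power_law_exponent(eps, 2.0 * (1.0 - eps) ** 1.0, eps_c=1.0)
print(beta, c)   # ~1.0, ~2.0
```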
Abstract:
The objective of this work was to combine the advantages of the dried blood spot (DBS) sampling process with highly sensitive and selective negative-ion chemical ionization tandem mass spectrometry (NICI-MS-MS) to analyze for recent antidepressants, including fluoxetine, norfluoxetine, reboxetine, and paroxetine, from micro whole-blood samples (i.e., 10 μL). Before analysis, DBS samples were punched out, and antidepressants were simultaneously extracted and derivatized in a single step by use of pentafluoropropionic acid anhydride and 0.02% triethylamine in butyl chloride for 30 min at 60 °C under ultrasonication. Derivatives were then separated on a gas chromatograph coupled with a triple-quadrupole mass spectrometer operating in negative selected reaction monitoring mode, for a total run time of 5 min. To establish the validity of the method, trueness, precision, and selectivity were determined on the basis of the guidelines of the "Société Française des Sciences et des Techniques Pharmaceutiques" (SFSTP). The assay was found to be linear in the concentration ranges 1 to 500 ng mL⁻¹ for fluoxetine and norfluoxetine and 20 to 500 ng mL⁻¹ for reboxetine and paroxetine. Despite the small sampling volume, the limit of detection was estimated at 20 pg mL⁻¹ for all the analytes. The stability of DBS was also evaluated at -20 °C, 4 °C, 25 °C, and 40 °C for up to 30 days. Furthermore, the method was successfully applied to a pharmacokinetic investigation performed on a healthy volunteer after oral administration of a single 40-mg dose of fluoxetine. Thus, this validated DBS method combines a single extraction-derivatization step with a fast and sensitive GC-NICI-MS-MS technique. Using microliter blood samples, this procedure offers a patient-friendly tool in many biomedical fields such as checking treatment adherence, therapeutic drug monitoring, toxicological analyses, or pharmacokinetic studies.
Abstract:
Brain-computer interfaces (BCIs) are becoming more and more popular as an input device for virtual worlds and computer games. Depending on their function, a major drawback is the mental workload associated with their use, and significant effort and training are required to control them effectively. In this paper, we present two studies assessing how the mental workload of a P300-based BCI affects participants' reported sense of presence in a virtual environment (VE). In the first study, we employ a BCI exploiting the P300 event-related potential (ERP) that allows control of over 200 items in a virtual apartment. In the second study, the BCI is replaced by a gaze-based selection method coupled with wand navigation. In both studies, overall performance is measured and individual presence scores are assessed by means of a short questionnaire. The results suggest that there is no immediate benefit in visualizing events in the VE triggered by the BCI and that no learning about the layout of the virtual space takes place. To alleviate this, we propose that future P300-based BCIs in VR be set up so as to require users to make some inference about the virtual space, so that they become aware of it, which is likely to lead to higher reported presence.
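For readers unfamiliar with P300-based selection, the core idea is that the item whose flashes evoke the strongest flash-locked EEG response is taken as the user's choice. The sketch below averages the epochs recorded for each candidate item and scores them in a 250-450 ms window; a real BCI such as the one in the paper would use a trained classifier rather than this simple amplitude heuristic, so treat the function and its parameters as assumptions.

```python
import numpy as np

# Toy P300 item selection: average the flash-locked epochs for each candidate
# item and choose the item with the largest mean amplitude in the 250-450 ms
# window, where the P300 is expected. Real systems use trained classifiers
# (e.g. LDA); this heuristic is only an illustration.

def select_item(epochs_by_item, fs=250.0):
    # epochs_by_item: {item_id: array of shape (n_flashes, n_samples)}
    lo, hi = int(0.25 * fs), int(0.45 * fs)
    scores = {item: ep.mean(axis=0)[lo:hi].mean()
              for item, ep in epochs_by_item.items()}
    return max(scores, key=scores.get)

# synthetic demo: item "lamp" has a P300-like bump, "door" does not
fs, n = 250.0, 200
t = np.arange(n) / fs
bump = np.exp(-((t - 0.35) ** 2) / 0.002)
rng = np.random.default_rng(0)
epochs = {
    "lamp": 2.0 * bump + rng.normal(0, 1.0, (10, n)),
    "door": rng.normal(0, 1.0, (10, n)),
}
print(select_item(epochs, fs))   # expected: "lamp"
```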
Abstract:
The main objective of the study is to improve the business prerequisites of an SME that manufactures surface treatment equipment by designing a new operating model for it. The impetus for the development project was a loss of control caused by business growth, which turned profitability negative. The central framework of the work consists of three elements: business process re-engineering as the development method, project business as the form of business, and the outsourcing of production and installation as the focus of the new operating model. When the existing business processes were analysed, several problem areas in the company's operations were identified, and on this basis the improvement needs and the requirements for the new processes were defined. A clear problem in the company was a general lack of systematic practice, which showed in undefined business processes and the resulting problems in almost all of the company's functions. The company's technical competence is good, but the managerial side needs development. Typical problems were production times that were difficult to predict and the resulting delays in deliveries, manufacturing defects, and inefficiency in production and installation. The transition to outsourced manufacturing and installation also placed its own requirements on the new processes. With the new processes, the most central changes were investment in long-term planning, improved management of customer information, investment in managing the subcontractor network, improved internal communication within the organisation, and a move to a "virtual office" enabling remote work.
Abstract:
This thesis seeks to answer whether communication challenges in virtual teams can be overcome with the help of computer-mediated communication (CMC). Virtual teams are becoming an increasingly common way of working in many global companies. For virtual teams to reach their maximum potential, effective asynchronous and synchronous methods of communication are needed. The thesis covers communication in virtual teams, as well as leadership and trust building in virtual environments with the help of CMC. First, the communication challenges in virtual teams are identified using the framework of knowledge sharing barriers in virtual teams by Rosen et al. (2007). Secondly, leadership and trust in virtual teams are defined in the context of CMC. The performance of virtual teams is evaluated in the case study along these three dimensions. With the help of a case study of two virtual teams, the practical issues related to selecting and implementing communication technologies, as well as overcoming knowledge sharing barriers, are discussed. The case study involves a complex inter-organisational setting, where four companies work together to maintain a new IT system. The communication difficulties are related to inadequate communication technologies, lack of trust, and the undefined relationships of the stakeholders and the team members. As a result, it is suggested that communication technologies are needed to improve virtual team performance, but they are not by themselves capable of solving the communication challenges in virtual teams. In addition, suitable leadership and trust between team members are required to improve knowledge sharing and communication in virtual teams.
Abstract:
There is increasing interest in new enzyme preparations for the development of new products derived from bioprocesses to obtain alternative bio-based materials. In this context, four non-commercial lipases from Pseudomonas species were prepared, immobilized on different low-cost supports, and examined for potential biotechnological applications. Results: To reduce the costs of eventual scaling-up, the new lipases were obtained directly from crude cell extracts or from growth culture supernatants and immobilized by simple adsorption on Accurel EP100, Accurel MP1000 and Celite® 545. The enzymes evaluated were LipA and LipC from Pseudomonas sp. 42A2, a thermostable mutant of LipC, and LipI.3 from Pseudomonas CR611, which were produced in either homologous or heterologous hosts. The best immobilization results were obtained on Accurel EP100 for LipA and on Accurel MP1000 for LipC and its thermostable variant. LipI.3, which requires a refolding step, was poorly immobilized on all supports tested (best results on Accurel MP1000). To test the behavior of the immobilized lipases, they were assayed in triolein transesterification, where the best results were observed for lipases immobilized on Accurel MP1000. Conclusions: The suggested protocol does not require protein purification and uses crude enzymes immobilized by a fast adsorption technique on low-cost supports, which makes the method suitable for an eventual scale-up aimed at biotechnological applications. Therefore, a fast, simple and economical method for lipase preparation and immobilization has been set up. The low price of the supports tested and the simplicity of the procedure, which skips tedious and expensive purification steps, will contribute to cost reduction in biotechnological lipase-catalyzed processes.
Abstract:
PURPOSE: We propose the use of a retrospectively gated cine fast spin echo (FSE) sequence for characterization of carotid artery dynamics. The aim of this study was to compare cine FSE measures of carotid dynamics with measures obtained on prospectively gated FSE images. METHODS: The common carotid arteries in 10 volunteers were imaged using two temporally resolved sequences: (i) cine FSE and (ii) prospectively gated FSE. Three raters manually traced the common carotid artery area for all cardiac phases on both sequences. Measured areas and systolic-diastolic area changes were calculated and compared. Inter- and intra-rater reliability were assessed for both sequences. RESULTS: No significant difference between cine FSE and prospectively gated FSE areas was observed (P = 0.36). Both sequences produced repeatable cross-sectional area measurements: inter-rater intraclass correlation coefficient (ICC) = 0.88 on cine FSE images and 0.87 on prospectively gated FSE images. The minimum detectable difference (MDD) in systolic-diastolic area was 4.9 mm² with cine FSE and 6.4 mm² with prospectively gated FSE. CONCLUSION: This cine FSE method produced repeatable dynamic carotid artery measurements with less artifact and greater temporal efficiency than prospectively gated FSE. Magn Reson Med 74:1103-1109, 2015.
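Reliability figures of the kind quoted above (inter-rater ICC and minimum detectable difference) can be reproduced from a subjects-by-raters table of area measurements using standard formulas: SEM = SD·sqrt(1 − ICC) and MDD = 1.96·sqrt(2)·SEM. The sketch below uses the two-way random-effects ICC(2,1); the paper may have used a different ICC variant, and the data here are synthetic.

```python
import numpy as np

# Two-way random-effects ICC(2,1) from an (n_subjects x n_raters) table of
# measurements, plus the minimum detectable difference derived from it.
# Standard reliability formulas; not necessarily the exact variant used here.

def icc_2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)   # between-subject MS
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)   # between-rater MS
    sse = ((Y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                    # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def minimum_detectable_difference(Y):
    icc = icc_2_1(Y)
    sem = Y.std(ddof=1) * np.sqrt(1.0 - icc)   # standard error of measurement
    return 1.96 * np.sqrt(2.0) * sem           # 95% MDD

# example: 10 subjects measured by 3 raters (synthetic areas, mm^2)
rng = np.random.default_rng(1)
true_area = rng.normal(40.0, 5.0, size=(10, 1))
Y = true_area + rng.normal(0.0, 1.0, size=(10, 3))
print(icc_2_1(Y), minimum_detectable_difference(Y))
```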
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research work on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licences and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by comparison of the profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a first fast triage step that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents, for instance). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
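A minimal sketch of the profile-comparison step described above: extract a hue histogram from a scanned region and compare two such profiles with the Canberra distance. The hue extraction and the random example images are simplified assumptions standing in for the prototype's region-of-interest processing, and the Edge filter is omitted here.

```python
import numpy as np

# Compare hue-histogram profiles of scanned document regions with the
# Canberra distance; small distances suggest documents from a common source.

def hue_profile(rgb, bins=64):
    rgb = rgb.astype(float) / 255.0
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = np.where(mx - mn == 0, 1e-9, mx - mn)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h = np.where(mx == r, (g - b) / delta,
        np.where(mx == g, 2.0 + (b - r) / delta, 4.0 + (r - g) / delta))
    h = (h / 6.0) % 1.0                      # hue normalised to [0, 1)
    hist, _ = np.histogram(h, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def canberra(p, q):
    denom = np.abs(p) + np.abs(q)
    denom = np.where(denom == 0, 1.0, denom)  # avoid 0/0 for empty bins
    return np.sum(np.abs(p - q) / denom)

# usage sketch with two random "scans" (hypothetical image arrays)
rng = np.random.default_rng(0)
doc_a = rng.integers(0, 256, (100, 100, 3))
doc_b = rng.integers(0, 256, (100, 100, 3))
print(canberra(hue_profile(doc_a), hue_profile(doc_b)))
```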
Abstract:
Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which offers not only a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary-membership-based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
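The gene-score idea can be sketched for the simplified case of independent SNPs: per-SNP p-values are converted to 1-df chi-squared statistics and combined by their sum (chi-squared with k degrees of freedom) or by their maximum (Šidák-style correction of the smallest p-value). Pascal's key contribution is handling linkage disequilibrium between SNPs analytically, which this toy example deliberately ignores.

```python
from scipy import stats

# Toy gene score: convert SNP p-values to 1-df chi-squared statistics and
# combine them by their sum (chi2 with k df if SNPs were independent) or by
# their maximum (Šidák-style). LD between SNPs is deliberately ignored here.

def gene_scores(snp_pvalues):
    chi2 = [stats.chi2.isf(p, df=1) for p in snp_pvalues]
    k = len(chi2)
    p_sum = stats.chi2.sf(sum(chi2), df=k)                      # average signal
    p_max = 1.0 - (1.0 - stats.chi2.sf(max(chi2), df=1)) ** k   # strongest signal
    return p_sum, p_max

print(gene_scores([0.04, 0.20, 0.003, 0.51]))
```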
Abstract:
Identification of chemical compounds with specific biological activities is an important step in both chemical biology and drug discovery. When the structure of the intended target is available, one approach is to use molecular docking programs to assess the chemical complementarity of small molecules with the target; such calculations provide a qualitative measure of affinity that can be used in virtual screening (VS) to rank-order a list of compounds according to their potential to be active. rDock is a molecular docking program developed at Vernalis for high-throughput VS (HTVS) applications. Evolved from RiboDock, the program can be used against proteins and nucleic acids, is designed to be computationally very efficient, and allows the user to incorporate additional constraints and information as a bias to guide docking. This article provides an overview of the program structure and features and compares rDock to two reference programs, AutoDock Vina (open source) and Schrödinger's Glide (commercial). In terms of computational speed for VS, rDock is faster than Vina and comparable to Glide. For binding mode prediction, rDock and Vina are superior to Glide. The VS performance of rDock is significantly better than that of Vina, but inferior to Glide for most systems unless pharmacophore constraints are used; in that case rDock and Glide perform equally well. The program is released under the Lesser General Public License and is freely available for download, together with the manuals, example files and the complete test sets, at http://rdock.sourceforge.net/
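Virtual-screening performance of the kind compared above is commonly summarized by an enrichment factor: how much more often known actives appear near the top of the score-ranked list than expected by chance. The sketch below computes this generic metric from docking scores and activity labels; it is not tied to rDock's output format, and the example numbers are made up.

```python
# Enrichment factor at a top fraction of a score-ranked compound list:
# EF(x%) = (hit rate among the top x%) / (hit rate in the whole library).
# Generic virtual-screening metric; not specific to any docking program.

def enrichment_factor(scores, is_active, top_fraction=0.01):
    ranked = sorted(zip(scores, is_active), key=lambda t: t[0])  # lower = better
    n_top = max(1, int(len(ranked) * top_fraction))
    hits_top = sum(active for _, active in ranked[:n_top])
    hit_rate_top = hits_top / n_top
    hit_rate_all = sum(is_active) / len(is_active)
    return hit_rate_top / hit_rate_all if hit_rate_all else float("nan")

# usage: scores from any docking run, labels from a benchmark set (made up)
scores = [-9.2, -7.1, -8.5, -5.0, -6.3, -9.8]
labels = [1, 0, 1, 0, 0, 1]
print(enrichment_factor(scores, labels, top_fraction=0.5))   # 2.0 here
```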