493 results for Automatized Indexing


Relevance: 10.00%
Publisher:
Abstract:

Plasmons are the collective resonant excitation of conduction electrons. Plasmons excited by light in sub-wavelength-sized nanoparticles are called particle plasmons and are promising candidates for future microsensors because of the strong dependence of their resonance on externally controllable parameters, such as the optical properties of the surrounding medium and the electric charge of the nanoparticles. The extremely high scattering efficiency of particle plasmons makes it easy to observe individual nanoparticles in a microscope. The need to collect a statistically relevant number of data points quickly, together with the growing importance of plasmonic (above all gold) nanoparticles for medical applications, has pushed for the development of automated microscopes that can measure within the spectral window of biological tissue (the biological window) of 650 to 900 nm, which until now had been covered only partially. In this thesis I present the Plasmoscope, which was designed with exactly these requirements in mind: (1) an adjustable slit is placed at the entrance aperture of the spectrometer, which coincides with the image plane of the microscope, and (2) a piezo scanning stage makes it possible to raster the sample across this narrow slit. This implementation avoids optical elements that absorb in the near infrared. With the Plasmoscope I investigate the plasmonic sensitivity of gold and silver nanorods, i.e. the shift of the plasmon resonance in response to a change of the surrounding medium. The sensitivity is the measure of how well the nanoparticles can detect material changes in their environment, and it is therefore of great importance to know which parameters influence it. I show here that silver nanorods possess a higher sensitivity than gold nanorods within the biological window, and furthermore that the sensitivity grows with the thickness of the rods. I present a theoretical discussion of the sensitivity, identify the material parameters that influence it and derive the corresponding formulas. In a further step I present experimental data supporting the theoretical finding that, for sensitivity measurement schemes that also take the linewidth into account, gold nanorods with an aspect ratio of 3 to 4 give the best result. Reliable sensors must exhibit robust repeatability, which I examine for gold and silver nanorods. The plasmon resonance wavelength depends on the following intrinsic material parameters: electron density, background polarizability and relaxation time. Based on my experimental results I show that nanorods made of a copper-gold alloy have a red-shifted resonance compared with similarly shaped gold nanorods, and how the linewidth varies with the stoichiometric composition of the alloyed nanoparticles. The dependence of the linewidth on the material composition is also examined using silver-coated and uncoated gold nanorods. Semiconductor nanoparticles are candidates for efficient photovoltaic devices. The energy conversion requires charge separation, which is measured experimentally with the Plasmoscope by following the light-induced growth dynamics of gold spheres on semiconductor nanorods in a gold-ion solution via the scattered intensity.
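The closing dependence on electron density, background polarizability, relaxation time and surrounding medium can be connected to a standard textbook relation (added here for orientation; the thesis' own derivation for rods is not reproduced). In the quasi-static limit, the polarizability of a small metal sphere in a medium of permittivity \(\varepsilon_m\), with a Drude-type metal permittivity, is

    \[
    \alpha(\omega) \;\propto\; \frac{\varepsilon(\omega) - \varepsilon_m}{\varepsilon(\omega) + 2\varepsilon_m},
    \qquad
    \varepsilon(\omega) \;=\; \varepsilon_\infty - \frac{\omega_p^2}{\omega^2 + i\,\omega/\tau},
    \qquad
    \omega_p^2 \;=\; \frac{n e^2}{\varepsilon_0 m_e},
    \]

so the resonance condition \(\operatorname{Re}\varepsilon(\omega) \approx -2\varepsilon_m\) gives

    \[
    \omega_{\mathrm{res}} \;\approx\; \frac{\omega_p}{\sqrt{\varepsilon_\infty + 2\varepsilon_m}},
    \]

linking the resonance to the electron density \(n\) (through \(\omega_p\)), the background polarizability (through \(\varepsilon_\infty\)), the relaxation time \(\tau\) (mainly through the linewidth) and the surrounding medium \(\varepsilon_m\). For nanorods the factor of 2 is replaced by a shape-dependent depolarization factor, which is what makes the aspect ratio a tuning parameter.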

Relevance: 10.00%
Publisher:
Abstract:

In any terminological study, candidate term extraction is a very time-consuming task. Corpus analysis tools have automatized some of these processes, allowing the detection of relevant data within the texts and thereby facilitating term candidate selection. Nevertheless, these tools are normally not specific to terminology research; therefore, the units which are automatically extracted need manual evaluation. Over the last few years some software products have been specifically developed for automatic term extraction. They are based on corpus analysis, but use linguistic and statistical information to filter data more precisely. As a result, the time needed for manual evaluation is reduced. In this framework, we tried to understand whether, and how, these new tools really offer an advantage. In order to develop our project, we simulated a terminology study: we chose a domain (i.e. the legal framework for medicinal products for human use) and compiled a corpus from which we extracted terms and phraseologisms using AntConc, a corpus analysis tool. Afterwards, we compared our list with the lists extracted automatically by three different tools (TermoStat Web, TaaS and Sketch Engine) in order to evaluate their performance. In the first chapter we describe some principles relating to terminology and phraseology in language for special purposes and show the advantages offered by corpus linguistics. In the second chapter we illustrate some of the main concepts of the selected domain, as well as some of the main features of legal texts. In the third chapter we describe automatic term extraction and the main criteria for evaluating it; moreover, we introduce the term-extraction tools used for this project. In the fourth chapter we describe our research method and, in the fifth chapter, we show our results and draw some preliminary conclusions on the performance and usefulness of term-extraction tools.
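As a rough illustration of the frequency-based filtering such extractors perform, the sketch below scores term candidates by how much more frequent they are in a domain corpus than in a reference corpus. The file names and the simple "weirdness" score are illustrative assumptions, not the pipeline of AntConc or of any of the three tools evaluated in the study.

    # Minimal sketch of statistical term-candidate extraction.
    # Corpus file names are placeholders, not the study's data.
    import re
    from collections import Counter

    def tokens(path):
        with open(path, encoding="utf-8") as fh:
            return re.findall(r"[a-zà-ú]+", fh.read().lower())

    def relative_freq(toks):
        counts = Counter(toks)
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}, counts

    domain, domain_counts = relative_freq(tokens("medicinal_products_corpus.txt"))
    reference, _ = relative_freq(tokens("general_reference_corpus.txt"))

    # Score each word by how much more frequent it is in the domain corpus than
    # in the reference corpus; high scores suggest term candidates.
    candidates = {
        w: domain[w] / reference.get(w, 1e-9)
        for w, c in domain_counts.items()
        if c >= 5  # ignore rare words to reduce noise
    }

    for word, score in sorted(candidates.items(), key=lambda kv: -kv[1])[:20]:
        print(f"{word:25s} {score:10.1f}")

A real extractor adds part-of-speech patterns and multi-word handling on top of such a score, which is precisely what reduces the manual evaluation effort discussed above.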

Relevance: 10.00%
Publisher:
Abstract:

Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently layout typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography, and illustrate its use with practical examples from numerous open-source case studies.
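A minimal sketch of the described LSI-plus-MDS pipeline is given below, with toy artifact "vocabularies" and scikit-learn as stand-ins; the authors' actual implementation is the Software Cartography prototype, not this script.

    # Vocabulary of each software artifact -> LSI vector space -> 2D map position.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.manifold import MDS
    from sklearn.metrics.pairwise import cosine_distances

    artifacts = {                       # toy stand-ins for source files
        "Parser.java":   "token stream parse syntax tree node grammar",
        "Lexer.java":    "token stream character scan keyword grammar",
        "Renderer.java": "canvas paint layout pixel draw widget",
        "Widget.java":   "canvas widget event click draw layout",
    }

    tfidf = TfidfVectorizer().fit_transform(artifacts.values())   # term weights
    lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)       # latent "topics"
    dist = cosine_distances(lsi)                                  # vocabulary dissimilarity
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)              # 2D map layout

    for name, (x, y) in zip(artifacts, coords):
        print(f"{name:15s} x={x:+.3f} y={y:+.3f}")

Because the layout is derived from vocabulary rather than from an arbitrary graph drawing, artifacts with similar vocabulary land near each other, which is what makes maps of successive versions comparable.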

Relevance: 10.00%
Publisher:
Abstract:

Purpose: Physiological respiratory motion of tumors growing in the lung can be corrected with respiratory gating when treated with radiotherapy (RT). The optimal respiratory phase for beam-on may be assessed with a respiratory phase optimizer (RPO), a 4D image-processing software package developed for this purpose. Methods and Materials: Fourteen patients with lung cancer were included in the study. Every patient underwent 4D-CT, providing ten datasets covering ten phases of the respiratory cycle (0-100% of the cycle). We defined two morphological parameters for comparing 4D-CT images across respiratory phases: the tumor-volume to lung-volume ratio and the tumor-to-spinal-cord distance. The RPO automatized the calculations of these parameters (200 per patient) for each phase of the respiratory cycle, making it possible to determine the optimal interval for RT. Results: Lower-lobe lung tumors not attached to the diaphragm showed the largest breathing-induced motion. Maximum inspiration was considered the optimal phase for treatment in 4 patients (28.6%). In 7 patients (50%), however, the RPO showed the most favorable volumetric and spatial configuration in phases other than maximum inspiration. In 2 cases (14.3%) the RPO showed no benefit from gating. The tool was inconclusive in only one case. Conclusions: The RPO software presented in this study can help to determine the optimal respiratory phase for gated RT based on a few simple morphological parameters. Easy to apply in daily routine, it may be a useful tool for selecting patients who might benefit from breathing-adapted RT.
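The phase-selection idea can be sketched as follows: evaluate the two morphological parameters for every 4D-CT phase and pick the most favorable one. The data layout, the example values, and the single combined favorability score are illustrative assumptions, not the RPO implementation.

    # Minimal sketch: choose a beam-on phase from per-phase morphological parameters.
    from dataclasses import dataclass

    @dataclass
    class Phase:
        percent: int                    # position in the respiratory cycle (0-90%)
        tumor_volume_cc: float
        lung_volume_cc: float
        tumor_cord_distance_mm: float

    def favorability(p: Phase) -> float:
        # Assumption: a smaller tumor/lung volume ratio and a larger
        # tumor-to-spinal-cord distance are both considered favorable.
        return p.tumor_cord_distance_mm / (p.tumor_volume_cc / p.lung_volume_cc)

    phases = [
        Phase(0, 34.1, 4200.0, 21.5),   # maximum inspiration (illustrative values)
        Phase(50, 35.0, 3100.0, 24.0),  # mid-cycle
        Phase(90, 34.6, 2900.0, 19.8),  # end expiration
    ]

    best = max(phases, key=favorability)
    print(f"Optimal beam-on phase: {best.percent}% of the respiratory cycle")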

Relevance: 10.00%
Publisher:
Abstract:

Case series are a commonly reported study design, but the label "case series" is used inconsistently and sometimes incorrectly. Mislabeling impairs the appropriate indexing and sorting of evidence. This article tries to clarify the concept of case series and proposes a way to distinguish them from cohort studies. In a cohort study, patients are sampled on the basis of exposure and are followed over time, and the occurrence of outcomes is assessed. A cohort study may include a comparison group, although this is not a necessary feature. A case series may be a study that samples patients with both a specific outcome and a specific exposure, or one that samples patients with a specific outcome and includes patients regardless of whether they have specific exposures. Whereas a cohort study, in principle, enables the calculation of an absolute risk or a rate for the outcome, such a calculation is not possible in a case series.
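To make the last distinction concrete (a textbook illustration added here, not part of the article):

    \[
    \text{absolute risk} \;=\; \frac{\text{new cases during follow-up}}{\text{persons at risk at baseline}},
    \qquad
    \text{rate} \;=\; \frac{\text{new cases}}{\text{person-time at risk}} .
    \]

Because a case series is sampled on the outcome, it supplies only the numerators and no population denominator, so neither quantity can be computed from it.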

Relevance: 10.00%
Publisher:
Abstract:

With the advent of cheaper and faster DNA sequencing technologies, assembly methods have greatly changed. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing read lengths between 35 and 400 base pairs. Reconstructing an organism’s genome from these millions of reads is a computationally expensive task. Our algorithm solves this problem by organizing and indexing the reads using n-grams, which are short DNA sequences of fixed length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was utilized to determine the most likely reads to join, through the development of a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16 to 20x coverage. We were able to successfully re-assemble entire genomes up to 100,000 nucleotides in length.
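The indexing step can be sketched as follows: index every read by its length-n substrings so that candidate overlaps are looked up rather than found by comparing all read pairs. The shared-n-gram count below is a simple stand-in for the probabilistic join metric developed in the thesis, and the reads and n are toy values.

    # Minimal sketch of n-gram indexing of reads to find candidate joins.
    from collections import defaultdict
    from itertools import combinations

    def ngrams(seq, n):
        """All length-n substrings of a read."""
        return {seq[i:i + n] for i in range(len(seq) - n + 1)}

    def candidate_joins(reads, n=6):
        index = defaultdict(set)          # n-gram -> ids of reads containing it
        for rid, seq in enumerate(reads):
            for g in ngrams(seq, n):
                index[g].add(rid)
        shared = defaultdict(int)         # (read a, read b) -> number of shared n-grams
        for rids in index.values():
            for a, b in combinations(sorted(rids), 2):
                shared[a, b] += 1
        return sorted(shared.items(), key=lambda kv: -kv[1])

    reads = ["ACGTACGTGGTCAT", "GGTCATTTACGGA", "TTACGGACCGTA"]
    for (a, b), score in candidate_joins(reads):
        print(f"reads {a} and {b} share {score} n-grams")

Only read pairs that share at least one n-gram are ever compared, which is what removes the exhaustive all-pairs search.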

Relevance: 10.00%
Publisher:
Abstract:

Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results relies heavily on many uncertain factors such as the user's memory, food knowledge, and portion estimation; as a result, accuracy is often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) of smart phones was developed to help people foster a healthy lifestyle. With this method, users use their smart phones before and after a meal to capture images or videos around the meal. The smart phone recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals of this work: (1) to develop a prototype system with existing methods in order to review the methods in the literature, find their drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve the recognition accuracy to a field-application level; (3) to design indexing methods for a large-scale image database to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient, and accurate food volume estimation methods using only smart phones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit-card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for the large-scale image database. Finally, the volume calculation was enhanced by reducing the reliance on the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
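The prototype's nearest-neighbor classification step can be sketched as follows. The color-histogram feature is an illustrative stand-in for the thesis's feature detector and descriptor, and the random arrays stand in for labelled meal photos; none of this reproduces the actual system.

    # Minimal sketch: feature vector per food image + 1-nearest-neighbor label.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def color_histogram(image, bins=8):
        """image: HxWx3 uint8 array; returns a normalized per-channel histogram."""
        hist = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
                for c in range(3)]
        hist = np.concatenate(hist).astype(float)
        return hist / hist.sum()

    rng = np.random.default_rng(0)
    # Placeholder "images": random arrays stand in for labelled photos of meals.
    train_images = [rng.integers(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(6)]
    train_labels = ["rice", "rice", "apple", "apple", "noodles", "noodles"]

    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit([color_histogram(im) for im in train_images], train_labels)

    query = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
    print("Predicted food item:", clf.predict([color_histogram(query)])[0])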

Relevance: 10.00%
Publisher:
Abstract:

Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently the layout typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.

Relevance: 10.00%
Publisher:
Abstract:

Second Life (SL) is an ideal platform for language learning. It is called a Multi-User Virtual Environment, in which users can have a variety of learning experiences in life-like environments. Numerous attempts have been made to use SL as a platform for language teaching, and its potential as a means to promote conversational interactions has been reported. However, the research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment. Not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on the latest theories of second language acquisition, especially Task-Based Language Teaching and the Interaction Hypothesis, proposes to design and implement an automatized interactive task space (AITS) in which robotic agents work as interlocutors for learners. This paper presents a design that incorporates these SLA theories into SL, describes how the design was implemented to construct the AITS while exploiting the controllability of SL, and reports the results of an evaluation experiment conducted on the constructed AITS.

Relevance: 10.00%
Publisher:
Abstract:

On 14 November 2013, the US District Court for the Southern District of New York issued a major ruling in favour of the Google Books project, concluding that Google’s unauthorized scanning and indexing of millions of copyrighted books in the collections of participating libraries, and subsequently making snippets of these works available online through the “Google Books” search tool, qualifies as a fair use under section 107 USCA. After assuming that Google’s actions constitute a prima facie case of copyright infringement, Judge Chin examined the four factors in section 107 USCA and concluded in favour of fair use on the grounds that the project provides “significant public benefits,” that the unauthorized use of copyrighted works (a search tool over scanned full-text books) is “highly transformative” and that it does not supersede or supplant these works. The fair use defence also excluded Google’s liability for making copies of the scanned books available to the libraries (and its secondary liability, since the libraries’ own actions were likewise found to be protected by fair use): providing these copies is aimed at enhancing lawful uses of the digitized books by the libraries for the advancement of the arts and sciences. A previous ruling by the same court, dated 22 March 2011, had rejected a settlement agreement proposed by the parties on the grounds that it was “not fair, adequate, and reasonable”. The Authors Guild has appealed the ruling.

Relevance: 10.00%
Publisher:
Abstract:

Tegrity Campus 2.0 is the first student achievement system that impacts learning across the entire institution, improving retention and student satisfaction. Tegrity makes class time available all the time by automatically capturing, storing and indexing every class on campus for replay by every student. With Tegrity, students quickly recall key moments or replay entire classes online, with digital notes, on their iPods and cell phones. [See PDF for complete abstract]

Relevance: 10.00%
Publisher:
Abstract:

When evaluated for promotion or tenure, faculty members are increasingly judged more on the quality than on the quantity of their scholarly publications. As a result, they want help from librarians in locating all citations to their published works for documentation in their curriculum vitae. Citation analysis using Science Citation Index and Social Science Citation Index provides a logical starting point in measuring quality, but the limitations of these sources leave a void in coverage of citations to an author's work. This article discusses alternative and additional methods of locating citations to published works.

Relevance: 10.00%
Publisher:
Abstract:

It has been repeatedly demonstrated that athletes often choke in high-pressure situations because anxiety can affect attention regulation and, in turn, performance. There are two competing theoretical approaches to explaining the negative anxiety-performance relationship. According to skill-focus theories, anxious athletes’ attention is directed at how to execute the sport-specific movements, which interrupts the execution of already automatized movements in expert performers. According to distraction theories, anxious athletes are distractible and focus less on the relevant stimuli. We tested these competing assumptions in a between-subjects design: semi-professional tennis players were assigned either to an anxiety group (n = 25) or to a neutral group (n = 28) and performed a series of second tennis serves into predefined target areas. As expected, anxiety was negatively related to serve accuracy. However, mediation analyses with the bootstrapping method revealed that this relationship was fully mediated by self-reported distraction and not by skill focus.
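For readers unfamiliar with the analysis, a mediation model with a bootstrapped indirect effect can be sketched as below. The data are simulated placeholders for the anxiety, distraction, and serve-accuracy measures; the study's actual variables, covariates, and software are not reproduced here.

    # Minimal sketch of a bootstrapped indirect effect (predictor -> mediator -> outcome).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 53
    anxiety = rng.normal(size=n)                        # predictor (group coding in the study)
    distraction = 0.6 * anxiety + rng.normal(size=n)    # proposed mediator
    accuracy = -0.5 * distraction + rng.normal(size=n)  # outcome

    def indirect_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]       # slope of mediator on predictor
        # slope of outcome on mediator, controlling for predictor
        b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y, rcond=None)[0][0]
        return a * b

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)      # resample cases with replacement
        boot.append(indirect_effect(anxiety[idx], distraction[idx], accuracy[idx]))

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {indirect_effect(anxiety, distraction, accuracy):.3f}, "
          f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

A confidence interval that excludes zero for the mediator (distraction) but not for the alternative mediator (skill focus) is the pattern the study reports.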

Relevance: 10.00%
Publisher:
Abstract:

Childhood traumatic events may lead to long-lasting psychological effects and contribute to the development of complex posttraumatic sequelae. These might be captured by the diagnostic concept of complex posttraumatic stress disorder (CPTSD) as an alternative to classic posttraumatic stress disorder (PTSD). CPTSD comprises a further set of symptoms in addition to those of PTSD, namely, changes in affect, self, and interpersonal relationships. Previous empirical research on CPTSD has focused on middle-aged adults but not on older adults. Moreover, predictor models of CPTSD are still rare. The current study investigated the association between traumatic events in childhood and complex posttraumatic stress symptoms in older adults. The mediation of this association by 2 social-interpersonal factors (social acknowledgment as a survivor and dysfunctional disclosure) was also examined. These 2 factors focus on the perception of acknowledgment by others and either the inability to disclose traumatic experiences or the ability to do so only with negative emotional reactions. A total of 116 older individuals (age range = 59–98 years) who had experienced childhood traumatic events completed standardized self-report questionnaires indexing childhood trauma, complex trauma sequelae, social acknowledgment, and dysfunctional disclosure of trauma. The results showed that traumatic events during childhood were associated with later posttraumatic stress symptoms but with classic rather than complex symptoms. Social acknowledgment and dysfunctional disclosure partially mediated this relationship. These findings suggest that childhood traumatic stress impacts individuals across the life span and may be associated with particular adverse psychopathological consequences.

Relevance: 10.00%
Publisher:
Abstract:

High-pressure powder X-ray diffraction is a fundamental technique for investigating structural responses to externally applied force. Synchrotron sources and two-dimensional detectors are required. In contrast to this conventional setup, high-resolution beamlines equipped with one-dimensional detectors could offer much better resolved peaks but cannot deliver accurate structure factors because they only sample a small portion of the Debye rings, which are usually inhomogeneous and spotty because of the small amount of sample. In this study, a simple method to overcome this problem is presented and successfully applied to solving the structure of an L-serine polymorph from powder data. A comparison of the obtained high-resolution high-pressure data with conventional data shows that this technique, providing up to ten times better angular resolution, can be of advantage for indexing, for lattice parameter refinement, and even for structure refinement and solution in special cases.
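Why angular resolution translates into lattice-parameter precision can be seen from Bragg's law (a standard relation added here for context, not taken from the paper):

    \[
    \lambda = 2 d \sin\theta
    \quad\Longrightarrow\quad
    \frac{\Delta d}{d} = -\cot\theta \,\Delta\theta ,
    \]

so, at fixed wavelength, narrowing the peaks by a factor of ten reduces the uncertainty in the d-spacings by roughly the same factor, which directly benefits indexing and lattice-parameter refinement.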