969 results for Computer generated works
Abstract:
We have studied how leaders emerge in a group as a consequence of interactions among its members. We propose that leaders can emerge as a consequence of a self-organized process based on local rules of dyadic interactions among individuals. Flocks are an example of self-organized behaviour in a group and properties similar to those observed in flocks might also explain some of the dynamics and organization of human groups. We developed an agent-based model that generated flocks in a virtual world and implemented it in a multi-agent simulation computer program that computed indices at each time step of the simulation to quantify the degree to which a group moved in a coordinated way (index of flocking behaviour) and the degree to which specific individuals led the group (index of hierarchical leadership). We ran several series of simulations in order to test our model and determine how these indices behaved under specific agent and world conditions. We identified the agent, world property, and model parameters that made stable, compact flocks emerge, and explored possible environmental properties that predicted the probability of becoming a leader.
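As a minimal sketch of the kind of model described above (this is not the authors' actual model), the following simulates a 2-D flock with a single global alignment rule and computes a "flocking index" as polarization: the length of the mean unit-heading vector, which is 1.0 when the group moves in perfect coordination.

```python
# Toy flock: agents carry only a heading; each step they adopt the
# group's mean heading plus uniform noise (a Vicsek-style update with
# a global neighbourhood, chosen for brevity).
import math
import random

def polarization(headings):
    """Flocking index: length of the average unit heading vector (0..1)."""
    n = len(headings)
    sx = sum(math.cos(h) for h in headings) / n
    sy = sum(math.sin(h) for h in headings) / n
    return math.hypot(sx, sy)

def step(headings, eta=0.1, rng=random):
    """Each agent turns toward the mean heading, perturbed by noise eta."""
    mx = sum(math.cos(h) for h in headings)
    my = sum(math.sin(h) for h in headings)
    mean = math.atan2(my, mx)
    return [mean + rng.uniform(-eta, eta) for _ in headings]

rng = random.Random(0)
headings = [rng.uniform(-math.pi, math.pi) for _ in range(50)]
for _ in range(20):
    headings = step(headings, eta=0.1, rng=rng)
# From random initial headings, the index rises toward 1 as the flock aligns.
```

The paper's model additionally uses local (distance-limited) interactions and a separate index of hierarchical leadership; the sketch only illustrates how a coordination index can be computed from agent states.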
Abstract:
We study the interaction of vector mesons with the octet of stable baryons in the framework of the local hidden gauge formalism using a coupled-channels unitary approach. We examine the scattering amplitudes and their poles, which in some cases can be associated with known J^P = 1/2^-, 3/2^- baryon resonances, and in others provide predictions. The formalism employed produces doublets of degenerate J^P = 1/2^-, 3/2^- states, a pattern which is observed experimentally in several cases. The findings of this work should also be useful to guide present experimental programs searching for new resonances, in particular in the strange sector, where the current information is very poor.
Abstract:
We present an algorithm for the computation of reducible invariant tori of discrete dynamical systems that is suitable for tori of dimensions larger than 1. It is based on a quadratically convergent scheme that approximates, at the same time, the Fourier series of the torus, its Floquet transformation, and its Floquet matrix. The Floquet matrix describes the linearization of the dynamics around the torus and, hence, its linear stability. The algorithm presents a high degree of parallelism, and the computational effort grows linearly with the number of Fourier modes needed to represent the solution. For these reasons it is a very good option for computing quasi-periodic solutions with several basic frequencies. The paper includes some examples (flows) to show the efficiency of the method on a parallel computer. In these flows we compute invariant tori of dimensions up to 5 by taking suitable sections.
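The core representational idea, approximating an invariant torus by a truncated Fourier series, can be illustrated in one dimension. The toy below (not the paper's algorithm, which couples this representation with a Newton scheme and the Floquet transformation) samples a parameterized curve, recovers its Fourier coefficients with a naive DFT, and evaluates the series back.

```python
# Represent a curve x(theta) by Fourier coefficients c_k recovered from
# equally spaced samples via a naive discrete Fourier transform.
import cmath
import math

def fourier_coeffs(samples):
    """c_k = (1/N) * sum_j x(theta_j) e^{-i k theta_j}, theta_j = 2*pi*j/N."""
    n = len(samples)
    return [sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n)) / n
            for k in range(n)]

def eval_series(coeffs, theta):
    """Evaluate x(theta) = sum_k c_k e^{i k theta}."""
    return sum(c * cmath.exp(1j * k * theta) for k, c in enumerate(coeffs))

n = 32
thetas = [2 * math.pi * j / n for j in range(n)]
samples = [math.cos(t) + 0.3 * math.sin(2 * t) for t in thetas]  # toy curve
coeffs = fourier_coeffs(samples)
# The series reproduces the curve at the sample points (DFT inversion).
errs = [abs(eval_series(coeffs, t) - s) for t, s in zip(thetas, samples)]
```

In the actual algorithm the unknown coefficients are determined so that the parameterized torus is invariant under the map, and the linear growth of cost in the number of modes comes from exploiting this kind of structure.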
Abstract:
The study "Ilmarisen Suomi" ja sen tekijät ("Ilmarinen's Finland" and Its Makers) offers new information and a historical interpretation of the building of high-technology Finland in the post-war period. The book recounts the many-sided activities of the makers of the ESKO computer and the machine's fate in the 1950s. The Matematiikkakonekomitea (Committee for Mathematical Machines, 1954-1960), which commissioned ESKO, intended the machine to be Finland's first computer, but according to the interpretation presented in the book, the committee also had broader national ideals and goals, such as founding a national central computing bureau. Early computers were called mathematics machines, a name describing their use. The book is the first thorough account, and at the same time the first study, of ESKO and its makers' undertaking in the 1950s. The Matematiikkakonekomitea was led by leading scientists of the day, Rolf Nevanlinna and Erkki Laurila. The doctoral study asks how the acquisition of the country's first computer was justified, what the Matematiikkakonekomitea actually did, and what specifically national motives the activities of the machine's makers expressed. The study draws on a wide range of archival materials, literature and interviews from Finland, Germany and Sweden. The analysis makes particular use of research literature in the history of technology and in social-science studies of science and technology. The book examines in detail how the makers of ESKO combined technology with national justifications and built a new, technically skilled "Ilmarinen's Finland", together with and in competition with other actors, turning technology into a national project for Finns. On the basis of the study of the Matematiikkakonekomitea and the ESKO project, it can be said of the relationship between Finns and technology that technology did not merely become a national matter for Finns; it was deliberately made into a national project, and one that was by no means unanimous even in the post-war period.
According to the study, the domestic committee accomplished a great deal and produced consequences more significant still, despite the fact that ESKO was completed badly behind schedule, in 1960. The committee contributed to IBM's success in Finland, to the beginnings of state-led science policy, and to the emergence of electronics expertise at the Kaapelitehdas (Cable Factory), the predecessor of Nokia.
Abstract:
This thesis seeks to answer whether communication challenges in virtual teams can be overcome with the help of computer-mediated communication (CMC). Virtual teams are becoming an increasingly common way of working in many global companies. For virtual teams to reach their full potential, effective asynchronous and synchronous communication methods are needed. The thesis covers communication in virtual teams, as well as leadership and trust building in virtual environments with the help of CMC. First, the communication challenges in virtual teams are identified using the framework of knowledge-sharing barriers in virtual teams by Rosen et al. (2007). Second, leadership and trust in virtual teams are defined in the context of CMC. The performance of virtual teams is then evaluated in a case study along these three dimensions. Drawing on a case study of two virtual teams, the practical issues of selecting and implementing communication technologies, and of overcoming knowledge-sharing barriers, are discussed. The case study involves a complex inter-organisational setting in which four companies work together to maintain a new IT system. The communication difficulties are related to inadequate communication technologies, a lack of trust, and the undefined relationships between the stakeholders and the team members. As a result, it is suggested that communication technologies are needed to improve virtual team performance, but are not, on their own, capable of solving the communication challenges in virtual teams. In addition, suitable leadership and trust between team members are required to improve knowledge sharing and communication in virtual teams.
Abstract:
The caspase-3/p120 RasGAP module acts as a stress sensor that promotes pro-survival or pro-death signaling depending on the intensity and duration of the stressful stimuli. Partial cleavage of p120 RasGAP generates a fragment, called fragment N, which protects stressed cells by activating Akt signaling. Akt family members regulate many cellular processes, including proliferation, inhibition of apoptosis, and metabolism. These cellular processes are regulated by three distinct Akt isoforms: Akt1, Akt2 and Akt3. However, which of these isoforms is required for fragment N-mediated protection has not been defined. In this study, we investigated the individual contribution of each isoform to fragment N-mediated cell protection against Fas ligand-induced cell death. To this end, DLD1 and HCT116 isogenic cell lines lacking specific Akt isoforms were used. It was found that fragment N could activate Akt1 and Akt2, but that only the former could mediate the protective activity of the RasGAP-derived fragment. Even overexpression of Akt2 or Akt3 could not rescue the inability of fragment N to protect cells lacking Akt1. These results demonstrate a strict Akt isoform requirement for the anti-apoptotic activity of fragment N.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web carried out so far are predominantly based on the study of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to the steady increase in non-English web content.
Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build, and make publicly available, a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: the deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Owing to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are simply too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web proposed so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms.
At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
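To make the querying step concrete, here is a minimal sketch of the general idea of describing a search interface and programmatically turning filled-in field values into the GET request a crawler would issue. The interface description, field names and URL are invented for illustration; the thesis's actual data model is richer (labels, client-side scripts, result-page representation).

```python
# Describe a (hypothetical) search interface as its action URL plus a
# set of declared fields, then encode user-supplied values into a
# request URL, discarding values for fields the form does not declare.
from urllib.parse import urlencode

interface = {
    "action": "http://example.com/search",   # hypothetical endpoint
    "method": "GET",
    "fields": {"q": "text", "category": "select", "max_price": "text"},
}

def build_query_url(interface, values):
    """Encode only declared fields into the form's GET request URL."""
    known = {k: v for k, v in values.items() if k in interface["fields"]}
    return interface["action"] + "?" + urlencode(sorted(known.items()))

url = build_query_url(interface, {"q": "used laptop",
                                  "max_price": "300",
                                  "bogus": "silently dropped"})
```

A real deep-web crawler must additionally handle POST forms, JavaScript-driven submission, and extraction of the results embedded in the returned pages, which is where the thesis's form query language comes in.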
Abstract:
Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The ideal fibre model used to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions with respect to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed and the TFS and WFS are defined, leading to necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% of activation.
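The torque-feasible-space idea reduces to something very simple in one degree of freedom, which the sketch below illustrates (the moment arms and tension limits are made-up numbers, not values from the paper). Each fibre contributes torque r_i * f_i with tension bounded by 0 <= f_i <= fmax_i, so the achievable joint torques form an interval, and a desired torque is feasible exactly when it lies inside it.

```python
# 1-DOF torque-feasible space: with fibre tensions f_i in [0, fmax_i]
# and moment arms r_i, the feasible torques tau = sum(r_i * f_i) form
# the interval [sum of negative contributions, sum of positive ones].
def torque_feasible_interval(moment_arms, fmax):
    lo = sum(min(r * f, 0.0) for r, f in zip(moment_arms, fmax))
    hi = sum(max(r * f, 0.0) for r, f in zip(moment_arms, fmax))
    return lo, hi

def is_feasible(tau, moment_arms, fmax):
    lo, hi = torque_feasible_interval(moment_arms, fmax)
    return lo <= tau <= hi

# Two agonists (positive moment arms) and one antagonist (illustrative).
arms = [0.03, 0.02, -0.025]    # moment arms, m
fmax = [400.0, 300.0, 500.0]   # maximum tensions, N
lo, hi = torque_feasible_interval(arms, fmax)
```

In higher dimensions the interval becomes a zonotope and membership is checked with the tools from cable-driven robotics that the paper invokes; a fibre geometry that leaves a required torque outside the TFS is exactly the situation that forces a muscle such as the DLTa to stay silent in the model.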
Abstract:
Assisted reproductive technologies (ART) induce vascular dysfunction in humans and mice. In mice, ART-induced vascular dysfunction is related to epigenetic alteration of the endothelial nitric oxide synthase (eNOS) gene, resulting in decreased vascular eNOS expression and nitrite/nitrate synthesis. Melatonin is involved in epigenetic regulation, and its administration to sterile women improves the success rate of ART. We hypothesized that addition of melatonin to culture media may prevent ART-induced epigenetic and cardiovascular alterations in mice. We therefore assessed mesenteric-artery responses to acetylcholine and arterial blood pressure, together with DNA methylation of the eNOS gene promoter in vascular tissue and plasma nitric oxide concentration, in 12-wk-old ART mice generated with and without addition of melatonin to culture media and in control mice. As expected, acetylcholine-induced mesenteric-artery dilation was impaired (P = 0.008 vs. control) and mean arterial blood pressure increased (109.5 ± 3.8 vs. 104.0 ± 4.7 mmHg, P = 0.002, ART vs. control) in ART compared with control mice. These alterations were associated with altered DNA methylation of the eNOS gene promoter (P < 0.001 vs. control) and decreased plasma nitric oxide concentration (10.1 ± 11.1 vs. 29.5 ± 8.0 μM, P < 0.001, ART vs. control). Addition of melatonin (10^-6 M) to culture media prevented eNOS dysmethylation (P = 0.005 vs. ART + vehicle), normalized plasma nitric oxide concentration (23.1 ± 14.6 μM, P = 0.002 vs. ART + vehicle) and mesenteric-artery responsiveness to acetylcholine (P < 0.008 vs. ART + vehicle), and prevented arterial hypertension (104.6 ± 3.4 mmHg, P < 0.003 vs. ART + vehicle). These findings provide proof of principle that modification of culture media prevents ART-induced vascular dysfunction. We speculate that this approach will also allow the prevention of ART-induced premature atherosclerosis in humans.
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by the same modus operandi or the same source, and thus support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licences and 96 Portuguese passports seized in Switzerland, some of which were known to come from common sources. Results indicate that using Hue and Edge filters, or their combination, to extract profiles from images, and then comparing profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared among different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage step that helps target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents).
Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
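The comparison metric named above, the Canberra distance, is simple to state: it sums the element-wise absolute differences of two feature vectors, each normalized by the magnitude of the pair. A small distance between two document "profiles" suggests a possible common source. The profiles below are invented for illustration.

```python
# Canberra distance between two profiles (feature vectors):
# d(p, q) = sum_i |p_i - q_i| / (|p_i| + |q_i|),
# skipping terms where both entries are zero (0/0 treated as 0).
def canberra(p, q):
    total = 0.0
    for a, b in zip(p, q):
        denom = abs(a) + abs(b)
        if denom > 0:
            total += abs(a - b) / denom
    return total

profile_a = [0.12, 0.40, 0.05, 0.43]
profile_b = [0.11, 0.42, 0.05, 0.42]   # close to A: candidate common source
profile_c = [0.50, 0.10, 0.30, 0.10]   # far from A: likely unrelated
```

Because every term is normalized to [0, 1], the metric is sensitive to proportional differences in small feature values, which is one reason it can work well on colour/edge profiles of scanned documents.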
Abstract:
The adult dentate gyrus produces new neurons that morphologically and functionally integrate into the hippocampal network. In the adult brain, most excitatory synapses are ensheathed by astrocytic perisynaptic processes that regulate synaptic structure and function. However, these processes are formed during embryonic or early postnatal development, and it is unknown whether astrocytes can also ensheathe synapses of neurons born during adulthood and, if so, whether they play a role in their synaptic transmission. Here, we used a combination of serial-section immuno-electron microscopy, confocal microscopy, and electrophysiology to examine the formation of perisynaptic processes on adult-born neurons. We found that the afferent and efferent synapses of newborn neurons are ensheathed by astrocytic processes, irrespective of the age of the neurons or the size of their synapses. The quantification of gliogenesis and the distribution of astrocytic processes on synapses formed by adult-born neurons suggest that the majority of these processes are recruited from pre-existing astrocytes. Furthermore, the inhibition of astrocytic glutamate re-uptake significantly reduced postsynaptic currents and increased paired-pulse facilitation in adult-born neurons, suggesting that perisynaptic processes modulate synaptic transmission onto these cells. Finally, some processes were found intercalated between newly formed dendritic spines and potential presynaptic partners, suggesting that they may also play a structural role in the connectivity of new spines. Together, these results indicate that pre-existing astrocytes remodel their processes to ensheathe synapses of adult-born neurons and participate in the functional and structural integration of these cells into the hippocampal network.
Abstract:
Objectives: The present study evaluates the reliability of the Radio Memory® software (Radio Memory; Belo Horizonte, Brazil) in classifying lower third molars, analyzing the intra- and interexaminer agreement of the results. Study Design: An observational, descriptive study of 280 lower third molars was made. The corresponding orthopantomographs were analyzed by two examiners using the Radio Memory® software. The examination was repeated 30 days after the first observation by each examiner. Both intra- and interexaminer agreement were determined using the SPSS v12.0 software package for Windows (SPSS; Chicago, USA). Results: Intra- and interexaminer agreement was shown for both the Pell & Gregory and the Winter classifications (p < 0.01), with significant correlation between variables at the 99% level in all cases. Conclusions: The use of Radio Memory® software for the classification of lower third molars is shown to be a valid alternative to the conventional method (direct evaluation on the orthopantomograph), for both clinical and research applications.
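The study reports its examiner agreement as SPSS correlations; as a general illustration of how chance-corrected agreement between two raters assigning categorical classifications (such as Winter classes) can be computed, here is Cohen's kappa in plain Python. The ratings below are invented, and kappa is a common alternative statistic, not necessarily the one used in the study.

```python
# Cohen's kappa for two raters: (observed agreement - chance agreement)
# divided by (1 - chance agreement), where chance agreement comes from
# each rater's marginal category frequencies.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical Winter classifications from two examiners.
r1 = ["mesioangular", "vertical", "horizontal", "vertical", "mesioangular"]
r2 = ["mesioangular", "vertical", "horizontal", "mesioangular", "mesioangular"]
kappa = cohens_kappa(r1, r2)
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance.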
Abstract:
This manual describes how to run the newly produced GUI C++ program called 'WM'. Section two gives the installation instructions for the program. Section three describes the test runs, including running the WM program, samples of the input and output files, and some generated graphs, followed by the main form of the program, which was created using Borland C++ Builder 6.