978 results for text message analysis and question-answering system
Abstract:
[1] Skylounge Project final report. [2] Skylounge legal, technical, and financial supplementary study.
Abstract:
Mobile advertising is a rapidly growing sector that gives brands and marketing agencies the opportunity to connect with consumers beyond traditional and digital media and instead communicate directly on their mobile phones. Mobile advertising will be intrinsically linked with mobile search, which has migrated from the internet to the mobile phone and is identified as an area of potential growth. The results of mobile searching show that, as a general rule, search results exceed 160 characters; a dialog is therefore required to deliver the relevant portion of a response to the mobile user. In this paper we focus initially on mobile search and mobile advert creation, and later on the mechanism of interaction between the user's request, the search results, advertising and dialog.
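As an illustration of the dialog mechanism this abstract motivates, the following minimal Python sketch splits a long search result into 160-character SMS segments; the segmentation function and the "reply MORE" convention are assumptions for illustration, not the paper's implementation.

    # Minimal sketch: deliver a long search result as 160-character SMS parts.
    SMS_LIMIT = 160

    def segment(text, limit=SMS_LIMIT):
        """Split text into SMS-sized parts, breaking on word boundaries."""
        parts, current = [], ""
        for word in text.split():
            candidate = (current + " " + word).strip()
            if len(candidate) <= limit:
                current = candidate
            else:
                parts.append(current)
                current = word
        if current:
            parts.append(current)
        return parts

    result = "An example search result that runs well past the SMS limit. " * 5
    parts = segment(result)
    print(len(parts), "messages; user replies MORE for the next segment")
    print(parts[0])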
Abstract:
Mobile phones have the potential to foster political mobilisation. There is significant political power in mobile technology. Like the Internet, mobile phones facilitate communication and rapid access to information. Compared to the Internet, however, mobile phone diffusion has reached a larger proportion of the population in most countries, and thus the impact of this new medium is conceivably greater. There are now more mobile phones in the UK than there are people (an average of 121 mobile phones for every 100 people). In this paper, the attempt to use modern mobile technology to handle the General Election is discussed. Pre-election advertising, election-day issues, including election news and results as they come in, and answering questions via text message about the results of current and/or previous general elections are considered.
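A minimal sketch of the kind of text-message question answering described above; the query format "RESULT <year> <party>", the lookup table, and the seat figures are illustrative assumptions, not the paper's system or real data.

    # Minimal sketch: answer election-result queries arriving by SMS.
    import re

    RESULTS = {  # (year, party) -> seats won; illustrative figures only
        (2001, "labour"): 412,
        (2001, "conservative"): 166,
        (1997, "labour"): 418,
    }

    def answer(sms):
        """Parse 'RESULT <year> <party>' and reply within 160 characters."""
        match = re.match(r"RESULT\s+(\d{4})\s+(\w+)", sms.strip(), re.IGNORECASE)
        if not match:
            return "Sorry, try: RESULT <year> <party>"
        year, party = int(match.group(1)), match.group(2).lower()
        seats = RESULTS.get((year, party))
        if seats is None:
            return "No data for %s in %d." % (party.title(), year)
        return ("%s won %d seats in the %d general election."
                % (party.title(), seats, year))[:160]

    print(answer("RESULT 2001 Labour"))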
Abstract:
Small errors proved catastrophic. Our purpose is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena: a small error in the former will produce an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that there exists a pair of test conditions that define a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device tested; the actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it puts those inputs as close as possible to the actual switching critical points and guarantees that the device will meet the input-output specifications. Prediction becomes impossible by classical analytical analysis bounded by Newton and Euclid. We have found that nonlinear dynamics is the natural state of being in all circuits and devices. Opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays there is a set of linear limits established around every aspect of digital or analog circuits, outside of which devices are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as revealed by our Ph.D. research. In practice, for linear standard informational methodologies, this chaotic data product is usually undesirable, and we are educated to be interested in obtaining a more regular stream of output data. This Ph.D. research explored the possibilities of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detector instrument able to put together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system status determination.
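The sensitivity to initial conditions the abstract describes can be illustrated with the logistic map, a standard textbook example rather than the research's circuit model; the parameter r = 4 and the 1e-6 initial offset are illustrative choices.

    # Two trajectories of the chaotic logistic map x -> r*x*(1-x),
    # started a hair apart, diverge to order-one separation.
    r = 4.0
    x, y = 0.400000, 0.400001   # initial conditions differ by 1e-6
    for step in range(1, 41):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print("step %2d: |x - y| = %.6f" % (step, abs(x - y)))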
Abstract:
Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.
Abstract:
The thesis focuses on introducing basic MIMO-based and Massive MIMO-based systems and their possible benefits. It then goes through the implementation options available, according to 3GPP standards, for 5G systems and how the transition is made from a non-standalone 5G RAN to a completely standalone 5G RAN. Having introduced the above-mentioned subjects and provided some definitions of telecommunications principles, we move on to a more technical analysis of capacity, throughput, power consumption, and costs, comparing all the mentioned parameters between a Massive MIMO-based system and a MIMO-based system. In the analysis of power consumption and costs, we also introduce the concept of virtualization and its benefits in terms of both power and costs. Finally, we try to justify a trade-off between having a more reliable system with high capacity and throughput and keeping costs as low as possible.
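For quantitative intuition behind the capacity comparison, the sketch below Monte-Carlo-estimates the standard MIMO ergodic capacity C = E[log2 det(I + (SNR/Nt) H H^H)] under an i.i.d. Rayleigh channel; the antenna counts and SNR are illustrative assumptions, not the thesis's system parameters.

    # Minimal sketch: ergodic capacity of MIMO vs. a Massive-MIMO-sized array.
    import numpy as np

    def mimo_capacity(nt, nr, snr_db, trials=2000, seed=0):
        """Average log2 det(I + (snr/nt) * H @ H^H) over Rayleigh channels."""
        rng = np.random.default_rng(seed)
        snr = 10 ** (snr_db / 10)
        caps = []
        for _ in range(trials):
            h = (rng.standard_normal((nr, nt))
                 + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
            caps.append(np.log2(np.linalg.det(
                np.eye(nr) + (snr / nt) * h @ h.conj().T)).real)
        return float(np.mean(caps))

    print("4x4 MIMO:    %.1f bit/s/Hz" % mimo_capacity(4, 4, 10))
    print("64x8 m-MIMO: %.1f bit/s/Hz" % mimo_capacity(64, 8, 10))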
Abstract:
Nowadays the idea of injecting world or domain-specific structured knowledge into pre-trained language models (PLMs) is becoming an increasingly popular approach for addressing problems such as biases, hallucinations, huge architectural sizes, and lack of explainability, all critical for real-world natural language processing applications in sensitive fields like bioinformatics. One recent work that has garnered much attention in neuro-symbolic AI is QA-GNN, an end-to-end model for multiple-choice open-domain question answering (MCOQA) tasks via interpretable text-graph reasoning. Unlike previous publications, QA-GNN mutually informs PLMs and graph neural networks (GNNs) on top of relevant facts retrieved from knowledge graphs (KGs). However, taking a more holistic view, existing PLM+KG contributions mainly consider commonsense benchmarks and ignore or only shallowly analyze performance on biomedical datasets. This thesis proposes a deep investigation of QA-GNN for biomedicine, comparing existing or brand-new PLMs, KGs, edge-aware GNNs, preprocessing techniques, and initialization strategies. By combining the insights that emerged from DISI's research, we introduce Bio-QA-GNN, which includes a biomedical KG. This work has led to a new state-of-the-art MCOQA model on biomedical/clinical text, largely outperforming the original one (+3.63% accuracy on MedQA). Our findings also contribute to a better understanding of the degree of explanation allowed by joint text-graph reasoning architectures and their effectiveness on different medical subjects and reasoning types. Code, models, datasets, and demos to reproduce the results are freely available at: https://github.com/disi-unibo-nlp/bio-qagnn.
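To make the text-graph fusion concrete, the following toy sketch conditions KG node embeddings on a question embedding and runs a small message-passing GNN, in the spirit of QA-GNN; the dimensions, the toy graph, and the fusion layer are illustrative assumptions, not the thesis's Bio-QA-GNN.

    # Minimal sketch: question-conditioned message passing over a KG subgraph.
    import torch
    import torch.nn as nn

    class ToyQAGNN(nn.Module):
        def __init__(self, dim=64, layers=2):
            super().__init__()
            self.layers = layers
            self.msg = nn.Linear(dim, dim)
            self.fuse = nn.Linear(2 * dim, dim)
            self.score = nn.Linear(dim, 1)

        def forward(self, q_emb, node_emb, adj):
            # Condition every KG node on the question representation.
            h = self.fuse(torch.cat([node_emb, q_emb.expand_as(node_emb)], -1))
            for _ in range(self.layers):
                deg = adj.sum(-1, keepdim=True).clamp(min=1)
                h = torch.relu(self.msg(adj @ h / deg)) + h  # residual update
            return self.score(h.mean(dim=0))  # candidate plausibility score

    q = torch.randn(64)         # question embedding from a PLM (assumed given)
    nodes = torch.randn(5, 64)  # embeddings of 5 retrieved KG concepts
    adj = (torch.rand(5, 5) > 0.5).float()
    print(ToyQAGNN()(q, nodes, adj))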
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives in the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, prompted by the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
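The XGMML export mentioned for the Interactome module can be sketched in a few lines; the toy nodes, edge, and labels below are illustrative assumptions, not IIS output.

    # Minimal sketch: write a two-node network as XGMML for Cytoscape import.
    import xml.etree.ElementTree as ET

    graph = ET.Element("graph", {
        "label": "toy_interactome",
        "xmlns": "http://www.cs.rpi.edu/XGMML",
        "directed": "0",
    })
    nodes = {"P53": "1", "MDM2": "2"}
    for label, node_id in nodes.items():
        ET.SubElement(graph, "node", {"id": node_id, "label": label})
    ET.SubElement(graph, "edge",
                  {"source": nodes["P53"], "target": nodes["MDM2"], "label": "binds"})
    ET.ElementTree(graph).write("toy.xgmml", encoding="utf-8", xml_declaration=True)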
Abstract:
El Niño-Southern Oscillation (ENSO) is a climatic phenomenon related to the inter-annual variability of global meteorological patterns, influencing sea surface temperature and rainfall variability. It influences human health indirectly through extreme temperature and moisture conditions that may accelerate the spread of some vector-borne viral diseases, like dengue fever (DF). This work examines the spatial distribution of the association between ENSO and DF in the countries of the Americas during 1995-2004, a period which includes the 1997-1998 El Niño, one of the most important climatic events of the 20th century. Data regarding the Southern Oscillation index (SOI), indicating El Niño-La Niña activity, were obtained from the Australian Bureau of Meteorology. The annual DF incidence (AIy) by country was computed using Pan American Health Organization data. SOI and AIy values were standardised as deviations from the mean and plotted in bar-line graphs. The regression coefficient values between SOI and AIy (rSOI,AI) were calculated and spatially interpolated by an inverse distance weighted algorithm. The results indicate that among the five years registering the highest numbers of cases (1998, 2002, 2001, 2003 and 1997), four had El Niño activity. In the southern hemisphere, the annual spatial weighted mean centre of epidemics moved southward, from 6° 31' S in 1995 to 21° 12' S in 1999, and the rSOI,AI values were negative in Cuba, Belize, Guyana and Costa Rica, indicating synchrony between higher DF incidence rates and higher El Niño activity. The rSOI,AI map allows visualisation of a graded surface with higher values of ENSO-DF association for Mexico, Central America, the northern Caribbean islands and the extreme north-northwest of South America.
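The inverse distance weighted interpolation used for the rSOI,AI surface can be sketched as follows; the station coordinates, correlation values, and power parameter are illustrative assumptions, not the study's data.

    # Minimal sketch: inverse-distance-weighted (IDW) interpolation.
    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
        """Weight each known point by 1/distance**power and average."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
        w = 1.0 / (d ** power + eps)   # nearer points dominate
        return (w * values).sum(axis=1) / w.sum(axis=1)

    # Correlation coefficients at four locations (lon, lat); toy values.
    stations = np.array([[-99.1, 19.4], [-84.1, 9.9], [-76.5, 3.4], [-58.4, -34.6]])
    r_values = np.array([-0.6, -0.4, 0.1, 0.3])
    grid = np.array([[-90.0, 10.0], [-70.0, -10.0]])
    print(idw(stations, r_values, grid))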
Abstract:
The HACCP system is being increasingly used to ensure food safety. This study investigated the validation of control measures in order to establish performance indicators for the HACCP system in the manufacturing process of Lasagna Bolognese (meat lasagna). Samples were collected along the manufacturing process as a whole, before and after the CCPs. The following indicator microorganisms (MIs) were assessed: total mesophile and faecal coliform counts. The same MIs were analyzed in the final product, as well as the microbiological standards required by the current legislation. A significant reduction in the total mesophile count was observed after cooking (p < 0.001). After storage, there was a numerical, though non-significant, change in the MI count. Faecal coliform counts were also significantly reduced (p < 0.001) after cooking. We were able to demonstrate that the HACCP system allowed us to meet the standards set by both the company and the Brazilian regulations, as proven by the reduction in the established indicators.
Abstract:
Chaotic dynamical systems with two or more attractors lying on invariant subspaces may, provided certain mathematical conditions are fulfilled, exhibit intermingled basins of attraction: each basin is riddled with holes belonging to the basins of the other attractors. In order to investigate the occurrence of such a phenomenon in dynamical systems of ecological interest (two-species competition with extinction), we have characterized quantitatively the intermingled basins using periodic-orbit theory and scaling laws. The latter results agree with a theoretical prediction from a stochastic model, and also with an exact result for the scaling exponent that we derived for the specific class of models investigated. We discuss the consequences of the scaling laws in terms of the predictability of the final state (extinction of either species) in an ecological experiment.
Abstract:
The volume of the primary (PCS) and secondary (SCS) circulatory systems in the Atlantic cod Gadus morhua was determined using a modified dye dilution technique. Cod (N=10) were chronically cannulated in the second afferent branchial artery with PE-50 tubing. Evans Blue dye was bound to harvested fish plasma at a concentration of 1 mg dye ml⁻¹ plasma, and injected at a concentration of 1 mg kg⁻¹ body mass. Serial sampling from the cannula produced a dye dilution curve, which could be described by a double exponential decay equation. Curve analysis enabled the calculation of the primary circulatory and total distribution volumes. The difference between these volumes is assumed to be the volume of the SCS. From the dilution curve, it was also possible to calculate flow rates between and within the systems. The results of these experiments suggest a plasma volume in the PCS of 3.42±0.89 ml 100 g⁻¹ body mass, and in the SCS of 1.68±0.35 ml 100 g⁻¹ body mass (mean ± S.D.), or approximately 50% that of the PCS. Flow rate to the SCS was calculated as 2.7% of the resting cardiac output. There was an allometric relationship between body mass and blood volumes. Increasing condition factor showed a tendency towards smaller blood volumes of the PCS, expressed as a percentage of body mass, but this was not evident for the volume of the SCS.
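The double exponential decay fit described above can be sketched with a standard least-squares routine; the synthetic dilution data and starting guesses are assumptions for illustration, not the study's measurements.

    # Minimal sketch: fit C(t) = a1*exp(-k1*t) + a2*exp(-k2*t) to dilution data.
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, a1, k1, a2, k2):
        # Fast term: mixing within the PCS; slow term: exchange with the SCS.
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

    t = np.linspace(0, 120, 25)  # minutes after dye injection
    conc = double_exp(t, 0.8, 0.15, 0.2, 0.01)
    conc += np.random.default_rng(0).normal(0, 0.01, t.size)  # synthetic noise

    (a1, k1, a2, k2), _ = curve_fit(double_exp, t, conc, p0=(1, 0.1, 0.1, 0.01))
    print("extrapolated C(0) = %.3f; volume = injected dose / C(0)" % (a1 + a2))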
Abstract:
The aim of the study is a historical analysis of the work undertaken by the public health organizations dedicated to combating Aedes aegypti, as well as an epidemiological study of persons with unexplained fever, with a view to evaluating the occurrence of dengue within the population. The MAC-ELISA, GAC-ELISA, haemagglutination inhibition, isolation and typing tests were used. Organophosphate intoxication in agricultural workers was also assessed by measuring concentrations of serum cholinesterase. A total of 2,094 serum samples were collected in 23 towns; the type 1 dengue virus was detected in 17 towns and autochthony was confirmed in 12 of them. Cholinesterase was measured in 2,391 serum samples, of which 53 cases had abnormal levels; poisoning was confirmed in 3 cases. The results reveal an epidemic whose gravity was not officially known. The relationship between levels of IgM and IgG antibodies indicates the outbreak's tendency. The widespread distribution of the vector is troubling because of the possibility of the urbanization of wild yellow fever, whereas the absence of A. aegypti in 2 towns with autochthony suggests the existence of another vector. Since there is no vaccine against dengue, combating the vector is the most efficient measure for preventing outbreaks. The eradication of the vector depends on government decisions which depend, for their execution, on the organization of the Health System and the propagation of information concerning the prevention of the disease using all possible means, because short- and long-term results depend on the education and the active participation of the entire population.