943 results for Problems of Computer Intellectualization
Abstract:
We consider a mechanical problem concerning a 2D axisymmetric body moving forward on the plane and making slow turns of fixed magnitude about its axis of symmetry. The body moves through a medium of non-interacting particles at rest, and collisions of particles with the body's boundary are perfectly elastic (billiard-like). The body has a blunt nose: a line segment orthogonal to the symmetry axis. It is required to make small cavities of a special shape on the nose so as to minimize its aerodynamic resistance. This problem of optimizing the shape of the cavities amounts to a special case of the optimal mass transfer problem on the circle, with the transportation cost being the squared Euclidean distance. We find the exact solution of this problem when the amplitude of rotation is smaller than a fixed critical value, and give a numerical solution otherwise. As a by-product, we obtain an explicit description of the solution for a class of optimal transfer problems on the circle.
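For orientation, the generic optimal mass transfer problem on the circle with quadratic cost can be stated as follows; this is a sketch of the standard Monge–Kantorovich formulation, not the paper's specific construction of the measures involved:

```latex
% Standard Monge--Kantorovich problem on the circle S^1 with quadratic cost
% (the marginals \mu and \nu stand in for the measures arising from the cavity problem).
\[
  \min_{\gamma \in \Pi(\mu,\nu)} \int_{S^1 \times S^1} |x - y|^2 \, d\gamma(x, y),
\]
% where \Pi(\mu,\nu) denotes the set of couplings (transport plans) with marginals \mu and \nu,
% and |x - y| is the Euclidean distance between points of the circle embedded in the plane.
```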
Abstract:
The main objective of this work was to develop an application capable of determining the diffusion times and diffusion coefficients of optical clearing agents and water inside a known type of muscle. Different types of chemical agents, such as medications or metabolic products, can also be used with the implemented method. Since the diffusion times can be calculated, it is possible to describe the dehydration mechanism that occurs in the muscle. Calculating the diffusion time of an optical clearing agent makes it possible to characterize the refractive-index matching mechanism of optical clearing. By using both the diffusion times and the diffusion coefficients of water and clearing agents, not only are the optical clearing mechanisms characterized, but information about the duration and magnitude of the optical clearing effect is also obtained. Such information is crucial for planning a clinical intervention in combination with optical clearing. The experimental method and the equations implemented in the developed application are described throughout this document, demonstrating its effectiveness. The application was developed in MATLAB, and the method was customized to better fit the application's needs. This process significantly improved processing efficiency, reduced the time needed to obtain the results, added multiple validations that prevent common errors, and introduced extra functionalities such as saving application progress or exporting information in different formats. Tests were performed using glucose measurements in muscle. For testing purposes, some of the data was also intentionally changed in order to obtain different simulations and results from the application. The entire project was validated by comparing the calculated results with the ones found in the literature, which are also described in this document.
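As a rough illustration of the kind of computation such an application performs (the thesis's actual MATLAB implementation and equations are not reproduced here, and the function and parameter names below are hypothetical), a diffusion time can be estimated by fitting an exponential saturation model to a measured kinetic curve:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation_model(t, amplitude, tau, offset):
    """Exponential saturation curve often used to describe optical-clearing kinetics:
    the measured signal approaches its final value with characteristic time tau."""
    return offset + amplitude * (1.0 - np.exp(-t / tau))

def estimate_diffusion_time(time_s, signal):
    """Fit the saturation model to a measured time series and return the
    characteristic diffusion time tau (in the same units as time_s)."""
    p0 = [signal[-1] - signal[0], time_s[-1] / 3.0, signal[0]]  # rough initial guess
    params, _ = curve_fit(saturation_model, time_s, signal, p0=p0)
    return params[1]  # tau

# Hypothetical usage with synthetic data standing in for a measured curve:
t = np.linspace(0, 300, 60)                                     # seconds
measured = saturation_model(t, 0.2, 45.0, 1.0) + np.random.normal(0, 0.005, t.size)
print(f"estimated diffusion time: {estimate_diffusion_time(t, measured):.1f} s")
```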
Abstract:
The first part of the study presents the types of barriers to tourism development that may occur during the planning phase of this development and in the phase of implementation of these plans, including endogenous and exogenous barriers. The second part presents the results of research on the factors hindering the development of tourism identified in a selected region of Wielkopolska Province (Poland). The article gives a detailed description of the categories of tourism barriers, which include political and legal, economic, infrastructural, social, geographical and organizational problems. The final part of the article presents the differences in the understanding of these problems among stakeholder groups, which leads to the conclusion that, in order to identify problematic issues precisely, the opinions of different categories of stakeholders should be taken into account. Only such an approach can lead to the construction of a development strategy that has no areas of uncertainty (i.e. «gaps» in the identification of problem areas).
Abstract:
We obtain a generalized Euler–Lagrange differential equation and transversality optimality conditions for Herglotz-type higher-order variational problems. Illustrative examples of the new results are given.
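For context, the classical first-order Herglotz problem and its generalized Euler–Lagrange equation are recalled below; the paper's higher-order conditions extend this and are not reproduced here:

```latex
% Classical first-order Herglotz variational problem (background only; the paper treats the higher-order case):
\[
  \dot z(t) = L\bigl(t, x(t), \dot x(t), z(t)\bigr), \qquad z(a) \text{ fixed}, \quad z(b) \to \text{extremum},
\]
% whose generalized Euler--Lagrange equation is
\[
  \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot x}
  + \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial \dot x} = 0 .
\]
```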
Abstract:
Introduction: Therapeutic commitment of general nurses influences their provision of mental health care to clients. It is the general nurses' predisposition to work therapeutically with clients who have mental health problems (MHPs). In Malawi, general nurses are the majority of the health care professionals who care for people living with HIV/AIDS (PLWHA), and they are expected to deal with the mental health problems of these patients. The provision of mental health care to PLWHA is vital because, apart from the physical illnesses associated with the virus, these people are also affected by mental health problems. However, most general nurses in Malawi feel neither confident nor competent when dealing with the mental health problems of their clients. This may negatively influence their therapeutic commitment in dealing with the mental health problems of PLWHA. Moreover, the therapeutic commitment of general nurses in providing mental health care to PLWHA in Malawi remains unknown. Materials and Methods: The study used a quantitative descriptive survey design. A convenience sample of 136 general nurses was used, and data were collected using the Mental Health Problems Perception Questionnaire. Permission to use the tool in this study was granted by Prof. Lauder. Ethical approval to conduct the study was granted by the Ethics Committees at the University of KwaZulu-Natal and the University of Malawi. Data were analysed using the Statistical Package for the Social Sciences, version 15.0. Results: The study findings revealed a linear relationship between general nurses' levels of knowledge and skills and their therapeutic commitment (r=.40, n=136, p<.05) to provide mental health care to PLWHA. Conclusion: This study suggests that general nurses' levels of therapeutic commitment in dealing with the MHPs of PLWHA vary, and that their levels of knowledge and skill in dealing with MHPs influence their willingness to provide mental health care to PLWHA.
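A minimal sketch of how such a correlation between knowledge/skill scores and therapeutic-commitment scores could be computed is given below; the study itself used SPSS, and the variable names and synthetic data here are purely illustrative:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-nurse scores; the study used subscales of the
# Mental Health Problems Perception Questionnaire analysed in SPSS.
rng = np.random.default_rng(0)
knowledge_skill = rng.normal(50, 10, size=136)
therapeutic_commitment = 0.4 * knowledge_skill + rng.normal(0, 10, size=136)

# Pearson correlation between the two score vectors.
r, p_value = pearsonr(knowledge_skill, therapeutic_commitment)
print(f"r = {r:.2f}, n = {len(knowledge_skill)}, p = {p_value:.3f}")
```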
Abstract:
The educational system encourages a positivist, ordered, unilateral and universal history through the incorporation of the chronological division of history into four stages. But would it be possible for students to study their own present? My paper sets out to show that, as Saab stated, the present is "the point of departure and of arrival of the teaching of history, determining the comings and goings to the past". This way of approaching the teaching of history is comfortable: there are no questions, there are no discussions. This vision of history, interpreted by the white, Western, heterosexual man, belongs to the project of modernity of the Enlightenment. Consequently, this history ignores the fact that we live in a postmodern society of suspicion, of weak thought. This also raises the problem of audiovisual pollution and the way in which teachers and students are confronted with it daily. It is therefore necessary to reflect on the question of the four-stage teaching of history. Today, the media and new technologies are changing the life of humanity. It is essential that students know their present history and the possible historical scenarios of the future. I believe in the need to adopt a didactics of present history and, consequently, we must make use of media and information literacy. Teacher training is needed that posits, as Gadamer said, that "the past and the present are in permanent negotiation". A teacher training that makes it possible to understand and think about the future history / the future histories. In my opinion, if students understand the complexity of their world and its multiple visions, they will be more tolerant and empathetic.
Abstract:
The main objectives of this thesis are: to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with the performance of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with the performance of the optical model in recognition and detection. The MATLAB® software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data in a way that highlights their similarities and differences. Identifying patterns in data of high dimensionality (more than three dimensions) is difficult because a graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most of the applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and a reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than the performance of the optical model.
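A minimal sketch of baseline PCA-based image compression is given below for illustration; it does not reproduce the thesis's IPCA variant or the JTC optical model, and the data and function names are hypothetical:

```python
import numpy as np

def pca_compress(images, n_components):
    """Project a stack of flattened images onto the top principal components.

    images: array of shape (n_samples, n_pixels). Returns (mean, basis, codes)
    such that images ~= mean + codes @ basis."""
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD yields the principal directions without forming the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # (n_components, n_pixels)
    codes = centered @ basis.T           # compressed representation
    return mean, basis, codes

def pca_reconstruct(mean, basis, codes):
    """Approximate reconstruction of the images from the compressed codes."""
    return mean + codes @ basis

# Hypothetical usage on random data standing in for face images:
faces = np.random.rand(100, 32 * 32)
mean, basis, codes = pca_compress(faces, n_components=20)
recon = pca_reconstruct(mean, basis, codes)
print("reconstruction MSE:", np.mean((faces - recon) ** 2))
```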
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
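As a toy illustration of the idea of time-dilated synchronization between wall-clock time and simulation time (not SVEET's actual implementation; all names below are hypothetical), a real-time event loop can be slowed down by a dilation factor so the simulator keeps pace with real hosts:

```python
import heapq
import time

class DilatedClock:
    """Maps wall-clock time to virtual time, slowed down by a time-dilation factor
    so a simulator with a heavy workload can keep pace with real hosts."""
    def __init__(self, dilation_factor):
        self.dilation = dilation_factor
        self.start = time.monotonic()

    def virtual_now(self):
        return (time.monotonic() - self.start) / self.dilation

def run_simulation(events, dilation_factor=2.0, horizon=1.0):
    """events: list of (virtual_timestamp, callback). Executes each callback when
    the dilated wall clock reaches its virtual timestamp."""
    clock = DilatedClock(dilation_factor)
    heapq.heapify(events)
    while events and events[0][0] <= horizon:
        ts, callback = heapq.heappop(events)
        while clock.virtual_now() < ts:          # wait for dilated real time to catch up
            time.sleep(0.001)
        callback(ts)

# Hypothetical usage: two events fired at virtual times 0.1 and 0.2.
run_simulation([(0.1, lambda t: print(f"packet sent at virtual t={t}")),
                (0.2, lambda t: print(f"packet received at virtual t={t}"))])
```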
Abstract:
The purpose of this research study was to demonstrate the practical linguistic study and evaluation of dissertations using two examples of the latest technology: the microcomputer and the optical scanner. This involved developing efficient methods for data entry and creating computer algorithms appropriate for personal linguistic studies. The goal was to develop a prototype investigation which demonstrated practical solutions for maximizing the linguistic potential of the dissertation database. Text was entered with a Dest PC Scan 1000 optical scanner, whose function was to copy the complete stack of educational dissertations from the Florida Atlantic University Library into an IBM XT microcomputer. The optical scanner demonstrated its practical value by copying 15,900 pages of dissertation text directly into the microcomputer. A total of 199 dissertations, or 72% of the entire stack of education dissertations (277), were successfully copied into the microcomputer's word processor, where each dissertation was analyzed for a variety of syntax frequencies. The results of the study demonstrated the practical use of the optical scanner for data entry, the microcomputer for data and statistical analysis, and the availability of the college library as a natural setting for text studies. A supplemental benefit was the establishment of a computerized dissertation corpus which could be used for future research and study. The final step was to build a linguistic model of the differences in dissertation writing styles by creating 7 factors from 55 dependent variables through principal components factor analysis. The 7 factors (textual components) were then named and described on a hypothetical construct defined as a continuum from a conversational, interactional style to a formal, academic writing style. The 7 factors were then grouped through discriminant analysis to create discriminant functions for each of the 7 independent variables. The results indicated that a conversational, interactional writing style was associated with more recent dissertations (1972-1987), increasing author age, females, and the department of Curriculum and Instruction. A formal, academic writing style was associated with older dissertations (1972-1987), younger authors, males, and the department of Administration and Supervision. It was concluded that there were no significant differences in writing style due to subject matter (community college studies) compared to other subject matter. It was also concluded that there were no significant differences in writing style due to the location of dissertation origin (Florida Atlantic University, University of Central Florida, Florida International University).
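A minimal sketch of the kind of analysis pipeline described, with plain PCA standing in for principal components factor analysis followed by a discriminant analysis on the resulting factors, is shown below; the data and variable names are hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical syntax-frequency matrix: 199 dissertations x 55 dependent variables.
rng = np.random.default_rng(1)
syntax_freqs = rng.poisson(5, size=(199, 55)).astype(float)
department = rng.integers(0, 2, size=199)   # e.g. 0 = Curriculum & Instruction, 1 = Administration & Supervision

# Step 1: reduce the 55 variables to 7 textual components.
pca = PCA(n_components=7)
factors = pca.fit_transform(syntax_freqs)

# Step 2: discriminant analysis on the factor scores to separate the groups.
lda = LinearDiscriminantAnalysis()
lda.fit(factors, department)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("classification accuracy on the factors:", lda.score(factors, department))
```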
Abstract:
The very nature of computer science, with its constant changes, forces those who wish to keep up to adapt and react quickly. Large companies invest in staying up to date in order to generate revenue and stay active on the market. Universities, on the other hand, need to apply the same practice of staying up to date with industry needs in order to produce industry-ready engineers. By interviewing former students, now engineers in the industry, and current university staff, this thesis aims to learn whether there is room for enhancing the education through different lecturing approaches and/or curriculum adaptation and development. In order to address these concerns, a qualitative study has been conducted, focusing on data collection through semi-structured life-world interviews. The method used follows the seven stages of research interviewing introduced by Kvale and focuses on collecting and preparing relevant data for analysis. The collected data is transcribed, refined, and further analyzed in the "Findings and analysis" chapter. The focus of the analysis was answering the three research questions: learning how higher education impacts a Computer Science and Informatics Engineer's job, how to better manage the transition from studies to working in the industry, and how to develop a curriculum that supports the previous two. Unaltered quoted extracts are presented and individually analyzed. To paint a better picture, a theme-wise analysis is presented, summarizing valuable themes that were repeated throughout the interviewing phase. The findings imply that there are several factors directly influencing the quality of education. On the student side, these mostly concern expectations of and dedication to the studies; on the university side, it is commitment to the curriculum development process. Due to time and resource limitations, this research provides findings from a narrowed scope, although it can serve as a solid foundation for further development, possibly as PhD research.
Abstract:
Introduction: Human aging is marked by a decrease in the performance of some daily tasks, some even considered banal and imperceptible; when this limitation is accompanied by chronic diseases, the elderly person becomes a source of concern for the family. Objective: To identify the health problems of elderly people living in long-stay institutions based on self-reported diseases. This is a descriptive and quantitative study, conducted in a state capital of northeastern Brazil, involving 138 elderly people. For data collection we used a questionnaire containing demographic and institutional variables as well as variables related to self-reported health problems. Data were evaluated using bivariate analysis and the chi-square test of association. Results: A predominance of women was found (61.6%), aged 60-69 years (39.1%), coming from the state capital (51.4%), and with an institutionalization time of between 1 and 5 years (77.5%). The most frequent diseases were those related to the cardiovascular system (15.9%) and endocrine, nutritional and metabolic diseases (9.4%). A significant association was found between self-reported diseases and the age of the elderly (p=0.047). Conclusion: The study is expected to raise awareness among health professionals so that they provide better assistance to the institutionalized elderly, focusing on the real needs of these persons.
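A minimal sketch of the chi-square test of association used in such analyses is shown below; the contingency table is hypothetical and does not reproduce the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: age group (60-69 vs. 70+) by presence of a
# self-reported cardiovascular disease; the counts below are illustrative only.
table = [[30, 24],   # 60-69 years: with disease, without disease
         [41, 43]]   # 70+ years:   with disease, without disease

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```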
Abstract:
The development of Next Generation Sequencing has pushed Biology into the Big Data era. The ever-increasing gap between proteins with known sequences and those with a complete functional annotation requires computational methods for automatic structure and functional annotation. My research has focused on proteins and has so far led to the development of three novel tools, DeepREx, E-SNPs&GO and ISPRED-SEQ, based on Machine and Deep Learning approaches. DeepREx computes the solvent exposure of residues in a protein chain. This problem is relevant for the definition of structural constraints regarding the possible folding of the protein. DeepREx exploits Long Short-Term Memory layers to capture residue-level interactions between positions distant in the sequence, achieving state-of-the-art performance. With DeepREx, I conducted a large-scale analysis investigating the relationship between the solvent exposure of a residue and its probability of being pathogenic upon mutation. E-SNPs&GO predicts the pathogenicity of a Single Residue Variation. Variations occurring on a protein sequence can have different effects, possibly leading to the onset of diseases. E-SNPs&GO exploits protein embeddings generated by two novel Protein Language Models (PLMs), as well as a new way of representing functional information coming from the Gene Ontology. The method achieves state-of-the-art performance and is extremely time-efficient when compared to traditional approaches. ISPRED-SEQ predicts the presence of Protein-Protein Interaction sites in a protein sequence. Knowing how a protein interacts with other molecules is crucial for accurate functional characterization. ISPRED-SEQ exploits a convolutional layer to parse local context after embedding the protein sequence with two novel PLMs, greatly surpassing the current state of the art. All methods are published in international journals and are available as user-friendly web servers. They have been developed keeping in mind the standard guidelines for FAIRness (FAIR: Findable, Accessible, Interoperable, Reusable) and are integrated into the public collection of tools provided by ELIXIR, the European infrastructure for Bioinformatics.
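As a toy illustration of a bidirectional LSTM applied to per-residue prediction (not the published DeepREx architecture; all sizes and names below are assumptions), a minimal sketch might look like this:

```python
import torch
import torch.nn as nn

class ResidueExposureTagger(nn.Module):
    """Toy per-residue binary classifier (buried vs. exposed) built around a
    bidirectional LSTM; a sketch of the general idea only, not the published model."""
    def __init__(self, n_amino_acids=20, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_amino_acids, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)   # buried / exposed logits

    def forward(self, residue_ids):
        x = self.embed(residue_ids)          # (batch, length, embed_dim)
        x, _ = self.lstm(x)                  # (batch, length, 2*hidden_dim)
        return self.classifier(x)            # per-residue logits

# Hypothetical usage on a random "protein" of 120 residues:
model = ResidueExposureTagger()
sequence = torch.randint(0, 20, (1, 120))
logits = model(sequence)
print(logits.shape)   # torch.Size([1, 120, 2])
```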
Abstract:
Smart cameras allow pre-processing of video data on the camera itself instead of sending it to a remote server for further analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application introduces several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as cameras in the network trying to track objects while also keeping communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behaviour of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and hence offers high portability between different operating systems. By abstracting away various problems of computer vision and network communication, it enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras to use.
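As a toy illustration of the kind of distributed-tracking behaviour such a simulator explores (not CamSim's actual Java implementation; all names and the 1-D scene below are hypothetical), a minimal sketch follows:

```python
import random

class Camera:
    """Toy smart camera that tracks objects inside its field of view and may hand
    them over to neighbours; a sketch of the behaviour being simulated, not CamSim itself."""
    def __init__(self, name, fov):
        self.name, self.fov = name, fov        # fov = (x_min, x_max) on a 1-D "scene"
        self.tracked = set()

    def sees(self, obj_x):
        return self.fov[0] <= obj_x <= self.fov[1]

def step(cameras, objects, messages=0):
    """One simulation tick: each camera drops objects it no longer sees and
    broadcasts a handover request (counted as communication overhead)."""
    for cam in cameras:
        for obj_id, obj_x in objects.items():
            if obj_id in cam.tracked and not cam.sees(obj_x):
                cam.tracked.discard(obj_id)
                messages += len(cameras) - 1        # broadcast handover request
            elif obj_id not in cam.tracked and cam.sees(obj_x):
                cam.tracked.add(obj_id)
    return messages

# Hypothetical usage: two cameras with overlapping views, one object drifting right.
cams = [Camera("A", (0.0, 0.5)), Camera("B", (0.4, 1.0))]
objects = {"person1": 0.2}
total_msgs = 0
for _ in range(10):
    objects["person1"] += random.uniform(0.0, 0.1)
    total_msgs = step(cams, objects, total_msgs)
print("communication overhead (messages):", total_msgs)
```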