792 results for computer-based technology
Abstract:
The overall goal of the study was to describe nurses' acceptance of an Internet-based support system in the care of adolescents with depression. The data were collected in four phases during the period 2006–2010 from nurses working in adolescent psychiatric outpatient clinics and from professionals working with adolescents in basic public services. In the first phase, the nurses' anticipated perceptions of the usefulness of the Internet-based support system were explored before its implementation. In the second phase, the nurses' perceived ease of computer and Internet use and their attitudes toward it were explored. In the third phase, the features of the support system and its implementation process were described. In the fourth phase, the nurses' behavioural intention and actual use of the Internet-based support system in psychiatric outpatient care were described after one year of use. The Technology Acceptance Model (TAM) was used to structure the various research phases. Several benefits of using the Internet-based support system in the care of adolescents with depression were identified from the nurses' perspective. The nurses' technology skills were good and their attitudes towards computer use were positive. The support system was developed in several phases to meet the adolescents' needs. Before implementing an information technology (IT)-based support system, it is important to pay attention to the nurses' IT training, technology support, resources, and safety, as well as ethical issues related to the support system. After one year of using the system, the nurses perceived the Internet-based support system to be useful in the care of adolescents with depression. The adolescents' independent work with the support system at home and the program's systematic character were experienced as conducive to the treatment. However, the Internet-based support system was integrated only partly into the nurse-adolescent interaction, even though the nurses' perceptions of it were positive. The use of the IT-based system as part of the adolescents' depression care was viewed positively and its benefits were recognized. This serves as a good basis for future IT-based techniques. Successful implementation of IT-based support systems requires a systematic implementation plan and commitment on the part of the organization and its managers. Supporting and evaluating the implementation of an IT-based system should pay attention to changing the nurses' work styles. Health care organizations should be offered more flexible opportunities to utilize IT-based systems in direct patient care in the future.
Abstract:
Object detection is a fundamental task of computer vision that is used as a core part of many industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized before being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative), and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in detecting certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed to part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. A discriminative classifier is therefore used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
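As an illustration of the appearance-modelling step described above, the following is a minimal sketch (not the thesis's own randomized GMM) of how complex-valued Gabor responses might be turned into part probabilities with a standard Gaussian mixture; the feature dimensions, component count, synthetic data, and the use of scikit-learn's GaussianMixture are assumptions made purely for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for complex-valued Gabor responses sampled at object parts:
# each row is one feature vector (real and imaginary parts stacked side by side).
rng = np.random.default_rng(0)
n_samples, n_filters = 200, 8
complex_responses = rng.normal(size=(n_samples, n_filters)) + 1j * rng.normal(size=(n_samples, n_filters))
features = np.hstack([complex_responses.real, complex_responses.imag])

# Unsupervised appearance model: one mixture component per candidate part type.
n_parts = 4
gmm = GaussianMixture(n_components=n_parts, covariance_type="full", random_state=0)
gmm.fit(features)

# "Part probabilities": posterior responsibility of each component for a new response.
new_response = rng.normal(size=(1, n_filters)) + 1j * rng.normal(size=(1, n_filters))
new_features = np.hstack([new_response.real, new_response.imag])
part_probabilities = gmm.predict_proba(new_features)
print(part_probabilities)  # shape (1, n_parts), rows sum to 1
```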
Abstract:
This study compared the relative effectiveness of two computerized remedial reading programs in improving the word recognition, reading rate, and comprehension of adolescent readers demonstrating significant and longstanding reading difficulties. One of the programs was the Autoskill Component Reading Subskills Program, which provides instruction in isolated letters, syllables, and words to the point of rapid automatic responding; this program also incorporates reading-disability subtypes in its approach. The second program, Read It Again, Sam, delivers a repeated reading strategy. The study also examined the feasibility of using peer tutors with these two programs. Grade 9 students at a secondary vocational school who satisfied specific criteria with respect to cognitive and reading ability participated. Eighteen students were randomly assigned to three matched groups, based on prior screening with a battery of reading achievement tests. Two groups received training with one of the computer programs; the third group acted as a control and received the remedial reading program offered within the regular classroom. The groups met daily with a trained tutor for approximately 35 minutes and were required to accumulate twenty hours of instruction. At the conclusion of the program, the pretest battery was repeated. No significant differences were found between the treatment effects of the two computer groups. Each of the two treatment groups showed significantly improved word recognition and reading rate relative to the control group. Comprehension gains were modest: the treatment groups demonstrated a significant gain, relative to the control group, on one of the three comprehension measures, while only trends toward a gain were noted on the remaining two measures. The tutoring partnership appeared to be a viable alternative for the teacher seeking to provide individualized computerized remedial programs for adolescent unskilled readers. Both programs took advantage of computer technology in providing individualized drill and practice, instant feedback, and ongoing record keeping. With limited cautions, each of these programs was considered effective and practical for use with adolescent unskilled readers.
Abstract:
Although there is a consensus in the literature on the many uses of the Internet in education, as well as on the unique features of the Internet for presenting facts and information, there is no consensus on a standardized method for evaluating Internet-based courseware. Educators rarely have the opportunity to participate in the development of Internet-based courseware, yet they are encouraged to use the technology in their learning environments. This creates a need for summative evaluation methods for Internet-based health courseware. The purpose of this study was to assess evaluative measures for Internet-based courseware. Specifically, two entities were evaluated within the study: a) the outcome of the Internet-based courseware, and b) the Internet-based courseware itself. To this end, the Web site www.bodymatters.com was evaluated using two different approaches by two different cohorts. The first approach was a performance appraisal by a group of end-users. A positive, statistically significant change in the students' performance was observed due to the intervention of the Web site. The second approach was a product-oriented evaluation of the Web site with the use of a criterion-based checklist and an open-ended comments section. The findings indicate that a summative, criterion-based evaluation is best completed by a multidisciplinary team. The findings also indicated that the two cohorts reported different product-oriented appraisals of the Web site. The current research confirmed previous research finding that a poor expert evaluation of a Web site bears no relationship to whether or not end-users' performance improves due to the intervention of the Web site.
Abstract:
While the influence of computer technology has been widely studied in a variety of contexts, the drawing teaching studio is a particularly interesting context because of the juxtaposition of a traditional medium and computer technology. For this study, 5 Canadian postsecondary teachers engaged in a 2-round Delphi interview process to discuss their responses to the influence of computer technology on their drawing pedagogy. Data sources included transcribed interviews. Findings indicated that artist teachers are both cautious about embracing and curious to explore appropriate uses of computer technology in their drawing pedagogy. Artist teachers are both critical and optimistic about the influence of computer technology.
Abstract:
In machine learning, classification is the process of assigning a new observation to a particular category. Classifiers that implement classification algorithms have been widely studied over the past decades. Traditional classifiers are based on algorithms such as SVMs and neural networks, and are generally executed in software on CPUs, which leaves the system lacking in performance and with high energy consumption. Although GPUs can be used to accelerate the computation of some classifiers, their high power consumption prevents the technology from being deployed on portable devices such as embedded systems. To make the classification system more lightweight, classifiers should be able to run on more compact hardware instead of a group of CPUs or GPUs, and the classifiers themselves should be optimized for that hardware. In this thesis, we explore the implementation of a novel classifier on an FPGA-based hardware platform. The classifier, designed by Alain Tapp (Université de Montréal), is based on a large number of lookup tables that form tree-shaped circuits performing the classification tasks. The FPGA seems tailor-made to implement this classifier, with its rich lookup-table resources and highly parallel architecture. Our work shows that FPGAs can implement several such classifiers and perform classification on high-definition images at very high speed.
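To give a concrete feel for the kind of structure described, here is a minimal software sketch of a classifier built from small lookup tables arranged in a tree, in the spirit of (but not identical to) the circuit described above; the table width, depth, and random contents are assumptions for illustration only, and an FPGA implementation would map each table to hardware LUTs instead of evaluating them in software.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4  # each lookup table reads K bits (comparable to a hardware LUT input width)

def make_lut():
    # A LUT is just 2**K stored output bits; the random contents here are
    # stand-ins for whatever a training procedure would have written into them.
    return rng.integers(0, 2, size=2**K, dtype=np.uint8)

def lut_eval(lut, bits):
    # Interpret the K input bits as an address into the table.
    address = int("".join(str(b) for b in bits), 2)
    return lut[address]

def tree_classify(x_bits, layers):
    # Each layer groups the current bit vector into chunks of K bits,
    # feeds each chunk to one LUT, and passes the outputs to the next layer.
    current = list(x_bits)
    for layer in layers:
        current = [lut_eval(lut, current[i * K:(i + 1) * K]) for i, lut in enumerate(layer)]
    return current[0]  # final single bit = predicted class (binary case)

# Build a 3-layer tree for a 64-bit input: 16 LUTs -> 4 LUTs -> 1 LUT.
layers = [[make_lut() for _ in range(16)],
          [make_lut() for _ in range(4)],
          [make_lut() for _ in range(1)]]

x = rng.integers(0, 2, size=64)
print(tree_classify(x, layers))
```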
Science and technology of rubber reclamation with special attention to NR-based waste latex products
Abstract:
A comprehensive overview of the reclamation of cured rubber, with special emphasis on latex reclamation, is presented in this paper. The latex industry has expanded over the years to meet the world's demand for gloves, condoms, latex thread, etc. Due to the strict specifications for the products and the unstable nature of the latex, as much as 15% of final latex products are rejected. As waste latex rubber (WLR) represents a source of high-quality rubber hydrocarbon, it is a potential candidate for generating reclaimed rubber of superior quality. The role of the different components in the reclamation recipe is explained, and the reaction mechanism and chemistry during reclamation are discussed in detail. Different types of reclaiming processes are described, with special reference to processes that selectively cleave the cross-links in the vulcanized rubber. The state-of-the-art techniques of reclamation, with special attention to latex treatment, are reviewed. An overview of the latest developments concerning fundamental studies in the field of rubber recycling by means of low-molecular-weight compounds is given. A mathematical model of main-chain and crosslink scission during devulcanization of a rubber vulcanizate is also described.
Abstract:
The thesis introduces the octree and addresses the full range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm and its iterative counterpart are presented for the raster-to-octree conversion of CAT scan slices. To improve the speed of generating the octree from the slices, the possibility of exploiting the inherent parallelism in the conversion program is also explored. An octree node, which stores the volume information of a cube, often stores only the average density; this can lead to a "patchy" distribution of density during image reconstruction. In an attempt to alleviate this problem, the possibility of using vector quantization (VQ) to represent the information contained within a cube is explored. Considering the ease of compressing the information during the generation of octrees from CAT scan slices, the use of wavelet transforms to generate the compressed information in a cube is proposed. The modified algorithm for generating octrees from the slices is shown to accommodate the wavelet compression easily. Rendering the information stored in the octree is a complex task, chiefly because of the requirement to display volumetric information. Rays traced through each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
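As a rough illustration of the raster-to-octree conversion mentioned above, the following sketch builds an octree from a cubic voxel volume by recursively splitting it and collapsing uniform regions into single nodes. It is a simple top-down formulation written for clarity, whereas the thesis describes bottom-up and iterative variants; the data layout, uniformity test, and node representation are assumptions for the example.

```python
import numpy as np

def build_octree(volume):
    """Convert a cubic voxel volume (side length a power of two) to an octree.

    A node is either ('leaf', value) when the block is uniform,
    or ('branch', [8 children]) ordered by (z, y, x) octant.
    """
    if np.all(volume == volume.flat[0]):
        return ("leaf", volume.flat[0])          # uniform block collapses to one node
    h = volume.shape[0] // 2
    children = [build_octree(volume[z:z + h, y:y + h, x:x + h])
                for z in (0, h) for y in (0, h) for x in (0, h)]
    return ("branch", children)

# Example: an 8x8x8 volume with a dense 4x4x4 corner region.
vol = np.zeros((8, 8, 8), dtype=np.uint8)
vol[:4, :4, :4] = 255
tree = build_octree(vol)
print(tree[0], len(tree[1]))  # prints: branch 8
```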
Abstract:
This work is aimed at building an adaptable frame-based system for processing Dravidian languages. There are about 17 languages in this family, spoken by the people of South India. Karaka relations are one of the most important features of Indian languages: they are the semantico-syntactic relations between verbs and other related constituents in a sentence. The karaka relations and surface case endings are analyzed for meaning extraction. This approach is comparable with the broad class of case-based grammars. The efficiency of the approach is put to the test in two applications: one is machine translation and the other is a natural language interface (NLI) for information retrieval from databases. The system mainly consists of a morphological analyzer, a local word grouper, a parser for the source language, and a sentence generator for the target language. Among its contributions, this work gives an elegant and compact account of the relation between vibhakthi and karaka roles in Dravidian languages. The same basic mapping also explains simple and complex sentences in these languages, which suggests that the solution is not just ad hoc but has a deeper underlying unity. The methodology could be extended to other free word order languages. Since the frames designed for meaning representation are general, they are adaptable to other languages in this group and to other applications.
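The mapping from surface case endings (vibhakthi) to karaka roles lends itself to a simple table-driven illustration. The sketch below is hypothetical: the suffixes, the role table, the toy sentence, and the fallback rules are invented for the example and are not the thesis's actual lexicon or analyzer.

```python
# Hypothetical vibhakthi-suffix -> karaka-role table (illustrative only).
SUFFIX_TO_KARAKA = {
    "e": "karma (object)",            # accusative-like ending
    "kku": "sampradana (recipient)",  # dative-like ending
    "al": "karana (instrument)",      # instrumental-like ending
    "il": "adhikarana (locus)",       # locative-like ending
}

def tag_karaka(tokens):
    """Assign a karaka role to each token whose ending matches the table;
    unmatched tokens default to 'verb' (sentence-final) or 'karta (agent)'."""
    tagged = []
    for i, tok in enumerate(tokens):
        role = next((r for suf, r in SUFFIX_TO_KARAKA.items() if tok.endswith(suf)), None)
        if role is None:
            role = "verb" if i == len(tokens) - 1 else "karta (agent)"
        tagged.append((tok, role))
    return tagged

# Transliterated toy sentence: "Raman gave the book to the child".
print(tag_karaka(["Raman", "pustakatte", "kuttikku", "koduthu"]))
```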
Abstract:
Sharing information with those who need it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal, so that a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) System, an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS possess a "fragile architecture" that makes them liable to collapse when minor faults occur. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at the different tiers of the architecture. The results of the experiment conducted and their analysis show that such an architecture helps keep the different components of the software intact and insulated from any internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. Another empirical study examined which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were then applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and brews the information required to satisfy the need of the information discoverer from the documents available at its disposal (its information space). The various components of the system and their interaction follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interacted with multiple infotron dictionaries maintained in the system. In order to demonstrate the working of IDS and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed.
IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced by the augmentation of IDS, leading to an information discovery service. IDLIS demonstrates IDS in action and shows that any legacy system could be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.
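Since "infotron" and "infotron dictionary" are concepts specific to this thesis, the following is only a loose, hypothetical sketch of the general idea of dictionary-guided discovery over a document collection: a clue term is expanded through a small dictionary of related information elements, and the expanded set is used to rank documents. All names, structures, and the scoring rule are invented for the illustration and are not the thesis's IDS.

```python
# Hypothetical sketch of dictionary-guided information discovery (not the thesis's IDS).
INFOTRON_DICTIONARY = {
    # clue term -> related information elements the dictionary would supply
    "election": {"constituency", "counting", "returning officer", "ballot"},
    "library": {"catalogue", "loan", "member", "accession"},
}

DOCUMENTS = {
    "doc1": "counting of ballot papers per constituency completed",
    "doc2": "library loan records and member accession register",
    "doc3": "weather report for the district",
}

def discover(clue, documents, dictionary):
    """Expand the clue via the dictionary and rank documents by term overlap."""
    terms = {clue} | dictionary.get(clue, set())
    scores = {name: sum(t in text for t in terms) for name, text in documents.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(discover("election", DOCUMENTS, INFOTRON_DICTIONARY))
# doc1 ranks highest because it shares the most dictionary-expanded terms.
```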