908 results for computer-based instrumentation
Abstract:
This paper develops and validates a model for describing and computing the geometry of manually operated storage systems, which forms the basis for a computer-aided planning methodology. Thanks to its modular structure, the model can be extended to calculate storage-space requirements while taking storage strategies into account. Special solutions, such as the use of storage locations of differing heights, can also be considered. Finally, the model is examined for its applicability to the systematic generation of different layouts as alternative solutions.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In this short note we present the approximate construction of closed Poncelet configurations using the simulation of a mathematical pendulum. Although the idea goes back to the work of Jacobi, only the use of modern computer technologies assures the success of the construction. We also present some remarks on using such problems in project-based university courses, and we present a Matlab program able to produce animated Poncelet configurations with a given period. In the same spirit we construct Steiner configurations and give a few teaching-oriented remarks on the Poncelet grid theorem. (DIPF/authors)
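The closure phenomenon the note builds on can be illustrated directly, without the pendulum: iterate the Poncelet map between two circles, stepping from a point on the outer circle along a tangent to the inner circle. The sketch below (plain Python, not the authors' Matlab program) uses the classical concentric case, where r = R·cos(π/n) makes the configuration close after exactly n steps:

```python
import math

def poncelet_step(p, center, r, R):
    """One step of the Poncelet map: from point p on the outer circle of
    radius R (centered at the origin), follow a tangent line to the inner
    circle (radius r, at `center`) until it meets the outer circle again."""
    dx, dy = center[0] - p[0], center[1] - p[1]
    dist = math.hypot(dx, dy)
    alpha = math.atan2(dy, dx)      # direction from p toward the inner center
    beta = math.asin(r / dist)      # half-angle subtended by the inner circle
    u = (math.cos(alpha + beta), math.sin(alpha + beta))  # tangent direction
    t = -2.0 * (p[0] * u[0] + p[1] * u[1])  # chord length across the outer circle
    return (p[0] + t * u[0], p[1] + t * u[1])

# Concentric case: r = R*cos(pi/n) closes after exactly n steps (here n = 3).
R, n = 1.0, 3
r = R * math.cos(math.pi / n)
p = (R, 0.0)
for _ in range(n):
    p = poncelet_step(p, (0.0, 0.0), r, R)
print(p)  # numerically back at the starting point (1, 0)
```

By Poncelet's closure theorem, if the polygon closes for one starting point it closes for every starting point on the outer conic, which is what makes animated configurations with a given period possible.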
Abstract:
Children develop in a sea of reciprocal social interaction, but their brain development is predominately studied in non-interactive contexts (e.g., viewing photographs of faces). This dissertation investigated how the developing brain supports social interaction. Specifically, novel paradigms were used to target two facets of social experience—social communication and social motivation—across three studies in children and adults. In Study 1, adults listened to short vignettes—which contained no social information—that they believed to be either prerecorded or presented over an audio-feed by a live social partner. Simply believing that speech was from a live social partner increased activation in the brain’s mentalizing network—a network involved in thinking about others’ thoughts. Study 2 extended this paradigm to middle childhood, a time of increasing social competence and social network complexity, as well as structural and functional social brain development. Results showed that, as in adults, regions of the mentalizing network were engaged by live speech. Taken together, these findings indicate that the mentalizing network may support the processing of interactive communicative cues across development. Given this established importance of social-interactive context, Study 3 examined children’s social motivation when they believed they were engaged in a computer-based chat with a peer. Children initiated interaction via sharing information about their likes and hobbies and received responses from the peer. Compared to a non-social control, in which children chatted with a computer, peer interaction increased activation in mentalizing regions and reward circuitry. Further, within mentalizing regions, responsivity to the peer increased with age. Thus, across all three studies, social cognitive regions associated with mentalizing supported real-time social interaction. 
In contrast, the specific social context appeared to influence both reward circuitry involvement and age-related changes in neural activity. Future studies should continue to examine how the brain supports interaction across varied real-world social contexts. In addition to illuminating typical development, understanding the neural bases of interaction will offer insight into social disabilities such as autism, where social difficulties are often most acute in interactive situations. Ultimately, to best capture human experience, social neuroscience ought to be embedded in the social world.
Abstract:
Like many other higher education schools, ISCAP's population grew at a rate of almost 100% at the end of the twentieth century. Its administrative structures were reinforced, but not in the same proportion. Faced with the inability to solve the problem, the administration decided to implement a computer-based system available on the Internet. In a first stage, in 1997, the system was implemented as a support for services. The next stage, in 1999, aimed to increase the quality of student services. A project to make student services available on the Internet then began to be developed.
Abstract:
Weed management has become increasingly challenging for cotton growers in Australia in the last decade. Glyphosate, the cornerstone of weed management in the industry, is waning in effectiveness as a result of the evolution of resistance in several species. One of these, awnless barnyard grass, is very common in Australian cotton fields, and is a prime example of the new difficulties facing growers in choosing effective and affordable management strategies. RIM (Ryegrass Integrated Management) is a computer-based decision support tool developed for the south-western Australian grains industry. It is commonly used there as a tool for grower engagement in weed management thinking and strategy development. We used RIM as the basis for a new tool that can fulfil the same types of functions for subtropical Australian cotton-grains farming systems. The new tool, BYGUM, provides growers with a robust means to evaluate five-year rotations, including testing the economic value of fallows and fallow weed management, winter and summer cropping, cover crops, tillage, different herbicide options, herbicide resistance management, and more. The new model includes several northern-region-specific enhancements: winter and summer fallows, subtropical crop choices, barnyard grass seed-bank, competition, and ecology parameters, and more freedom in weed control applications. We anticipate that BYGUM will become a key tool for teaching and driving the changes that will be needed to maintain sound weed management in cotton in the near future.
Abstract:
The annual conference of the Gesellschaft für Didaktik der Mathematik took place in Switzerland for the third time in 2015. [...] With around 300 talks, 16 moderated sections, 15 working-group meetings, and 21 poster presentations, a broad spectrum of topics and approaches to research on the learning and teaching of mathematics opened up. (DIPF/Orig.)
Abstract:
Part 11: Reference and Conceptual Models
Abstract:
The paper catalogues the procedures and steps involved in agroclimatic classification. These vary from conventional descriptive methods to modern computer-based numerical techniques. There are three mutually independent numerical classification techniques, namely ordination, cluster analysis, and the minimum spanning tree; and under each technique several forms of grouping techniques exist. The choice of numerical classification procedure differs with the type of data set. In the case of numerical continuous data sets with both positive and negative values, the simple and least controversial procedures are the unweighted pair-group method (UPGMA) and the weighted pair-group method (WPGMA) under clustering techniques, with a similarity measure obtained either from the Gower metric or the standardized Euclidean metric. Where the number of attributes is large, these can be reduced to fewer new attributes defined by the principal components or coordinates through ordination. The first few components or coordinates explain the maximum variance in the data matrix. These derived attributes are less affected by noise in the data set. It is possible to check misclassifications using the minimum spanning tree.
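The numerical toolchain described here (standardized Euclidean distances, UPGMA/WPGMA clustering, and a minimum spanning tree) can be sketched with SciPy; the data below are a toy example, not the paper's:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy data: rows = sites, columns = climatic attributes (may be negative).
X = np.array([[0.0, 0.2], [0.1, 0.0], [5.0, 5.1], [5.2, 5.0]])

d = pdist(X, metric="seuclidean")   # standardized Euclidean distance
Z = linkage(d, method="average")    # UPGMA; use method="weighted" for WPGMA
groups = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into two groups

# Minimum spanning tree over the same distances, e.g. to inspect links
# that cross group boundaries when checking for misclassification.
mst = minimum_spanning_tree(squareform(d))
```

Swapping `metric="seuclidean"` for a Gower-type measure would require a custom distance function, since SciPy does not ship one for mixed-type data.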
Abstract:
Context: The efficacy of preoperative chemotherapy in Wilms’ tumor patients, and its value for specific subgroups, remains controversial. Objectives: The objective of this meta-analysis is to assess the efficacy of preoperative chemotherapy in Wilms’ tumor patients and to explore its true value for specific subgroups. Data Sources: A computer-based systematic search with “preoperative chemotherapy”, “Neoadjuvant Therapy” and “Wilms’ tumor” as search terms, up to January 2013, was performed. Study Selection: No language restrictions were applied. Searches were limited to randomized clinical trials (RCTs) or retrospective studies in human participants under 18 years. A manual examination of references in selected articles was also performed. Data Extraction: Relative Risks (RR) and their 95% Confidence Intervals (CI) for Tumor Shrinkage (TS), total Tumor Resection (TR), and Event-Free Survival (EFS), along with details of the subgroup analyses, were extracted. The meta-analysis was carried out with the software STATA 11.0. Finally, four original RCTs and 28 retrospective studies with 2375 patients were included. Results: For the preoperative chemotherapy vs. up-front surgery (PC vs. SU) comparison, the pooled RR was 9.109 for TS (95% CI: 5.109 - 16.241; P < 0.001), 1.291 for TR (95% CI: 1.124 - 1.483; P < 0.001) and 1.101 for EFS (95% CI: 0.980 - 1.238; P = 0.106). For the subgroup short course vs. long course (SC vs. LC), the pooled RR was 1.097 for TS (95% CI: 0.784 - 1.563; P = 0.587), 1.197 for TR (95% CI: 0.960 - 1.493; P = 0.110) and 1.006 for EFS (95% CI: 0.910 - 1.250; P = 0.430).
Conclusions: Short-course preoperative chemotherapy is as effective as long-course, and preoperative chemotherapy benefits Wilms’ tumor patients only in tumor shrinkage and resection, not in event-free survival.
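One standard way to obtain pooled RRs like those above is inverse-variance (fixed-effect) pooling of log relative risks, as implemented by meta-analysis software such as STATA. The sketch below uses made-up 2x2 tables, not the study data, and does not reproduce the authors' exact model:

```python
import math

def risk_ratio(a, n1, c, n2):
    """Log relative risk and its standard error from a 2x2 table:
    a events among n1 treated, c events among n2 controls."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    return math.log(rr), se

def pooled_rr(studies):
    """Fixed-effect (inverse-variance) pooled RR with a 95% CI."""
    logs, ses = zip(*(risk_ratio(*s) for s in studies))
    w = [1 / s**2 for s in ses]                     # inverse-variance weights
    m = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    se = 1 / math.sqrt(sum(w))
    return math.exp(m), math.exp(m - 1.96 * se), math.exp(m + 1.96 * se)

# Two illustrative studies: (events, arm size) for each arm - invented numbers.
print(pooled_rr([(20, 100, 10, 100), (18, 90, 9, 90)]))
```

A random-effects model (e.g. DerSimonian-Laird) would additionally widen the CI by a between-study variance term when the studies are heterogeneous.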
Abstract:
There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually), and is responsible for a wide range of health and social problems. On the positive side, though, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward finding ways of preventively promoting wellness, rather than solely treating already established illness. Evidence-based, patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. A lack of locally available personnel well trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. The success of CBIs, however, critically relies on ensuring the engagement and retention of users so that they remain motivated to use these systems and come back to them over the long term as necessary. Because of their text-only interfaces, current CBIs can only express limited empathy and rapport, which are among the most important factors of health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities.
Virtual characters interact using humans’ innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent’s social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].
Abstract:
Purpose – To study the possibility of using a NIR hardware solution to photograph individuals in a private vehicle, together with an analysis of its images. Method – A study of existing theories around the NIR method’s performance in selected conditions, and individual tests to examine whether the literature’s statements were valid for this study. Two empirical tests were carried out: the first at the Kapsch test track, and the other outside the test track as a single stationary test. An interview formed the basis for assessing the quality of the empirical data, with a focus on computer-based detection of the number of individuals in the vehicle. Findings – The results demonstrate the potential of the NIR method’s performance in a fully automated system for detecting the number of individuals inside a vehicle. Empirical data indicate that the method can depict individuals inside vehicles at sufficiently high quality, but it is greatly affected by reflections, weather, and light conditions. Implications – The results support the assumption that the NIR method, using an external light source, can be used to image the vehicle interior under varying weather and lighting conditions. They suggest that a NIR-based hardware setup can create images of high enough quality for the human eye to detect the number of individuals inside the vehicle. Taking overall performance into account, the main problem with the hardware setup is maintaining quality across the whole sample, and the crucial variables for the method’s performance are the light and reflection conditions. Limitations – The major limitation is that we restricted ourselves to a subjective analysis of the selection and assessment of the image features for computer-based detection of the number of individuals in the vehicle. We were limited to two tests: one in tough conditions where only the driver was in the vehicle, and a second stationary test focused on the number of people in vehicles and the impact of light sources on the result.
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based, but human understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – or the disregard for the fact that instances can be conceptualized independent of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. 
Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres of information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling through the layered information model can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools such as classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve, and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence like the one Parsons and Wand propose for information modeling.
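The separation argued for here, instances carrying properties independently of any class, can be pictured as a two-layer sketch in which classes are merely views evaluated over instances. All names below are hypothetical illustrations, not Parsons and Wand's notation:

```python
# Layer 1: instances exist as bundles of properties, with no class membership.
instances = [
    {"id": 1, "title": "Dewey Decimal Classification", "medium": "print"},
    {"id": 2, "title": "Schema.org", "medium": "online", "machine_readable": True},
]

# Layer 2: classes are predicates (views) evaluated over the instances, so the
# same instance can satisfy several classes, or none - no inherent classification.
classes = {
    "DigitalResource": lambda i: i.get("medium") == "online",
    "MachineReadable": lambda i: i.get("machine_readable", False),
}

def members(class_name):
    """Compute a class extension on demand instead of storing it."""
    return [i["id"] for i in instances if classes[class_name](i)]

print(members("DigitalResource"))  # [2]
```

Because class membership is derived rather than stored, adding or revising a class (schema evolution) never requires rewriting the instances, which is the point of contact with faceted classification.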
Abstract:
This paper proposes an architecture for machining process and production monitoring to be applied in machine tools with open computer numerical control (CNC). A brief description of the advantages of using open CNC for machining process and production monitoring is presented, with an emphasis on a CNC architecture using a personal computer (PC)-based human-machine interface. The proposed architecture uses CNC data and sensors to gather information about the machining process and production. It allows the development of different levels of monitoring systems with minimum investment, minimum need for sensor installation, and low intrusiveness to the process. Successful examples of the utilization of this architecture in a laboratory environment are briefly described. In conclusion, it is shown that a wide range of monitoring solutions can be implemented in production processes using the proposed architecture.
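The monitoring idea, polling values the open CNC already exposes, fusing them with external sensors, and raising process-level alarms, can be outlined as below. This is a minimal sketch with simulated signals and invented field names, not the paper's architecture or any real CNC vendor API:

```python
import random  # stands in for real CNC/sensor I/O in this sketch

class OpenCncMonitor:
    """Sketch of one monitoring level: sample CNC-internal signals
    (hypothetical names) and flag samples that exceed a process limit."""

    def __init__(self, load_limit):
        self.load_limit = load_limit
        self.alarms = []

    def read_cnc(self):
        # A real PC-based HMI would read these through the open CNC's
        # interface; here the values are simulated.
        return {"spindle_load": random.uniform(0.2, 0.6), "feed_override": 1.0}

    def check(self, sample):
        # Threshold test; excessive spindle load can indicate e.g. tool wear.
        if sample["spindle_load"] > self.load_limit:
            self.alarms.append(sample)

monitor = OpenCncMonitor(load_limit=0.8)
monitor.check({"spindle_load": 0.95, "feed_override": 1.0})
print(len(monitor.alarms))  # 1
```

Because the data source is the CNC itself, this level of monitoring needs no added sensors; higher levels would plug extra sensor channels into the same `check` path.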