942 results for Knowledge of Geometry
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications set by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the measured radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test (AUT). For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material, which simulate free-space propagation conditions. Moreover, these facilities can be used regardless of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of antenna measurement results through post-processing, without requiring additional measurements. First, a thorough review of the state of the art was carried out to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation error and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative function-extrapolation algorithms. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, no additional measurements are required. Noise is the most widely studied error in this Thesis, with a total of three alternatives proposed to filter out a significant part of the noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near-field, where spatial filtering can be applied. The last is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then likewise apply spatial filtering. All the alternatives are analyzed for the three most common near-field systems, including comprehensive statistical noise analyses to deduce the signal-to-noise ratio improvement achieved in each case.
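As a rough illustration of the modal-filtering alternative (a minimal sketch, not the Thesis' implementation; the array names and the choice of cutting exactly at the visible region are assumptions), the idea in a planar system is that plane-wave modes with k_x^2 + k_y^2 > k_0^2 decay evanescently away from the AUT, so at typical measurement distances they carry mostly noise and can be zeroed before the near-field-to-far-field transformation:

```python
import numpy as np

def modal_filter_planar(E_near, dx, dy, wavelength):
    """Suppress noise in planar near-field data by zeroing plane-wave
    modes outside the visible region |k_xy| > k0. Simplified sketch:
    practical implementations tie the filtering radius to the AUT's
    minimum sphere rather than cutting exactly at k0."""
    k0 = 2 * np.pi / wavelength
    ny, nx = E_near.shape
    # Plane-wave spectrum of the sampled tangential field
    spectrum = np.fft.fftshift(np.fft.fft2(E_near))
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=dy))
    KX, KY = np.meshgrid(kx, ky)
    # Keep only propagating (visible-region) modes
    spectrum[KX**2 + KY**2 > k0**2] = 0.0
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```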
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field sample and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first locates the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment, and its later replacement, easier; the second computationally removes the leakage effect without requiring the replacement of the faulty component.
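The truncation-error method belongs to the family of iterative band-limited extrapolation algorithms; one classic member of that family is the Gerchberg-Papoulis iteration, sketched here in 1D (a hedged illustration with hypothetical masks, not the exact algorithm of the Thesis). It alternates between enforcing the finite support of the field on the AUT plane and re-imposing the reliable measured samples; the number of iterations matters, since over-iterating amplifies noise, which is why the termination point deserves study:

```python
import numpy as np

def gerchberg_papoulis(measured, known_mask, aperture_mask, n_iter=50):
    """1D sketch of Gerchberg-Papoulis iterative extrapolation.
    measured: spectrum samples, valid only where known_mask is True.
    aperture_mask: True over the finite support of the field in the
    transform (AUT-plane) domain."""
    estimate = np.where(known_mask, measured, 0.0)
    for _ in range(n_iter):
        field = np.fft.ifft(estimate)
        field[~aperture_mask] = 0.0              # enforce finite support
        estimate = np.fft.fft(field)
        estimate[known_mask] = measured[known_mask]  # keep reliable data
    return estimate
```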
Abstract:
This article sets out to show that Technical Drawing is a versatile tool of expression and communication, essential for developing inquiry processes, the scientific basis and the comprehension of drawings and technological designs that can be manufactured. We demonstrate graphically and analytically that spatial vision and graphic thinking allow us to identify real-life problems graphically, develop proposals for solutions to be analysed from different points of view, plan and develop a project, and provide the information needed to make decisions about objects and technological processes. Drawing on knowledge of Technical Drawing and CAD tools, we have developed graphic analyses to improve and optimize our proposed modification of the geometry of conventional bricks, replacing their rectangular cells with hexagonal cells; this design is protected by a Spanish patent owned by the Polytechnic University of Madrid. The new internal geometry will improve the efficiency and the acoustic damping of walls built with horizontally hollow ceramic bricks, maintaining the same size as conventional bricks and without increasing manufacturing or sale costs. A single brick will achieve a width equivalent to more than four conventional bricks.
Abstract:
We introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. We test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, we use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
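As a minimal sketch of this supervised setup (illustrative data and kernel choice, not the authors' pipeline or dataset), one trains an SVM on expression profiles of genes whose functional class is known and then scores uncharacterized genes:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical data: rows are genes, columns are expression levels
# measured across a set of microarray experiments.
X_known = rng.normal(size=(200, 80))      # genes of known function
y_known = rng.integers(0, 2, size=200)    # 1 = member of the class
X_unknown = rng.normal(size=(30, 80))     # uncharacterized ORFs

# The kernel plays the role of the similarity function; the study
# compares several such metrics. An RBF kernel is one example.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_known, y_known)

predictions = clf.predict(X_unknown)        # predicted membership
scores = clf.decision_function(X_unknown)   # distance from boundary
```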
Abstract:
Carbon nanotubes exhibit the structure and chemical properties that make them apt substrates for many adsorption applications. Of particular interest are carbon nanotube bundles, whose unique geometry is conducive to the formation of pseudo-one-dimensional phases of matter, and graphite, whose simple planar structure allows ordered phases to form in the absence of surface effects. Although both of these structures have been the focus of many research studies, knowledge gaps still remain. Much of the work with carbon nanotubes has used simple adsorbates [1-43], and there is little kinetic data available. On the other hand, there are many studies of complex molecules adsorbing on graphite; however, there is almost no kinetic data reported for this substrate. We seek to close these knowledge gaps by performing a kinetic study of linear molecules of increasing length adsorbing on carbon nanotube bundles and on graphite. We elucidated the process of adsorption of complex admolecules on carbon nanotube bundles, while at the same time producing some of the first equilibrium results of the films formed by large adsorbates on these structures. We also extended the current knowledge of adsorption on graphite to include the kinetics of adsorption. The kinetic data that we have produced enables a more complete understanding of the process of adsorption of large admolecules on carbon nanotube bundles and graphite. We studied the adsorption of particles on carbon nanotube bundles and graphite using analytical and computational techniques. By employing these methods separately but in parallel, we were able to constantly compare and verify our results. We calculated and simulated the behavior of a given system throughout its evolution and then analyzed our results to determine which system parameters have the greatest effect on the kinetics of adsorption. Our analytical and computational results show good agreement with each other and with the experimental isotherm data provided by our collaborators. As a result of this project, we have gained a better understanding of the kinetics of adsorption. We have learned about the equilibration process of dimers on carbon nanotube bundles, identifying the “filling effect”, which increases the rate of total uptake, and explaining the cause of the transient “overshoot” in the coverage of the surface. We also measured the kinetic effect of particle-particle interactions between neighboring adsorbates on the lattice. For our simulations of monomers adsorbing on graphite, we succeeded in developing an analytical equation to predict the characteristic time as a function of chemical potential and of the adsorption and interaction energies of the system. We were able to further explore the processes of adsorption of dimers and trimers on graphite (again observing the filling effect and the overshoot). Finally, we were able to show that the kinetic behaviors of monomers, dimers, and trimers that have been reported in experimental results also arise organically from our model and simulations.
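A minimal sketch of the kind of lattice kinetic simulation such a study might run (rates, energies, lattice size and the neglect of particle-particle interactions are illustrative assumptions, not this project's parameters): monomers exchange with a reservoir at fixed chemical potential on a 1D lattice, and the coverage theta(t) is tracked toward equilibrium, the same quantity in which the "filling effect" and "overshoot" are observed for dimers:

```python
import numpy as np

def simulate_uptake(n_sites=500, n_steps=200_000, mu=-0.5,
                    eps_ads=-1.0, kT=0.3, seed=1):
    """Toy grand-canonical Monte Carlo of monomer adsorption on a
    1D lattice. An occupied site has energy eps_ads; exchanges with
    a reservoir at chemical potential mu are accepted with Metropolis
    probabilities. Returns coverage vs. Monte Carlo step."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(n_sites, dtype=bool)
    coverage = np.empty(n_steps)
    for step in range(n_steps):
        i = rng.integers(n_sites)
        # Effective energy change for inserting (+) or removing (-)
        dE = (eps_ads - mu) * (-1 if occ[i] else 1)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            occ[i] = ~occ[i]
        coverage[step] = occ.mean()
    return coverage

theta = simulate_uptake()
print("final coverage:", theta[-1])
```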
Abstract:
Background: HPV vaccine coverage is far from ideal in Valencia, Spain, and this could be partially related to low knowledge about the disease and the vaccine. We therefore assessed these, as well as the attitude towards vaccination, in adolescent girls, and tried to identify independently associated factors that could potentially be modified by an intervention in order to increase vaccine coverage. Methods: A cross-sectional study was conducted in a random selection of schools in the Spanish region of Valencia. We asked the mothers of 1278 girls, who should have been vaccinated in the 2011 campaign, for informed consent. For those who accepted their daughters' participation, a questionnaire regarding knowledge of HPV infection and the vaccine was administered to the girls at school. Results: 833 mothers (65.1%) accepted participation, and all of their daughters responded to the questionnaire. Of those, 89.9% had heard about HPV and associated it with cervical cancer. Only 14% related it to other problems such as genital warts. The knowledge score of the girls who had heard about HPV was 6.1/10. Knowledge was unrelated to the number of contacts with the health system (pediatrician or nurse), and positively correlated with discussions with classmates about the vaccine. Adolescents of Spanish origin, or with a vaccinated older sister, scored higher. 67% of the girls thought that the vaccine prevented cancer, and 22.6% felt that, although it prevented cancer, the vaccine had important safety problems. 6.4% of the girls rejected the vaccine because of safety concerns or because they did not consider themselves at risk of infection. 71.5% of the girls had received at least one vaccine dose. Vaccinated girls scored higher on knowledge (p = 0.05). Conclusion: Knowledge about HPV infection and the vaccine was fair among adolescents in Valencia; it was independent of the number of contacts with the health system, but correlated with conversations about the vaccine with peers and with vaccination status. An action to improve HPV knowledge through health providers might increase vaccine coverage in adolescents.
Abstract:
In 2003 there was an increase in the use of pulmonary artery catheters (PACs) in Australia from 12,000 to 16,000 units in intensive care and peri-operative care. This survey of intensive care nurses in five intensive care units in Queensland addressed knowledge of the use, safety and complications of the pulmonary artery catheter, using a previously validated 31-question multiple-choice survey. One hundred and thirty-nine questionnaires were completed, a response rate of 46%. The mean score was 13.3 out of 31 (42.8% correct), with a standard deviation of 4.2 and a range of 4 to 25. Scores were significantly higher in participants with more ICU experience, a higher nursing grade, a higher self-assessed level of knowledge and greater frequency of PAC supervision. There was no significant correlation between total score and hospital- or university-based education, or between total score and public or private hospital participants. Fifty-one per cent were unable to correctly identify the significant pressure change as the catheter is advanced from the right ventricle to the pulmonary artery.
Abstract:
As a knowable object, the human body is highly complex. Evidence from several converging lines of research, including psychological studies, neuroimaging and clinical neuropsychology, indicates that human body knowledge is widely distributed in the adult brain, and is instantiated in at least three partially independent levels of representation. Sensori-motor body knowledge is responsible for on-line control and movement of one's own body and may also contribute to the perception of others' moving bodies; visuo-spatial body knowledge specifies detailed structural descriptions of the spatial attributes of the human body; and lexical-semantic body knowledge contains language-based knowledge about the human body. In the first chapter of this Monograph, we outline the evidence for these three hypothesized levels of human body knowledge, then review relevant literature on infants' and young children's human body knowledge in terms of the three-level framework. In Chapters II and III, we report two complementary series of studies that specifically investigate the emergence of visuo-spatial body knowledge in infancy. Our technique is to compare infants' responses to typical and scrambled human bodies, in order to evaluate when and how infants acquire knowledge about the canonical spatial layout of the human body. Data from a series of visual habituation studies indicate that infants first discriminate scrambled from typical human body pictures at 15 to 18 months of age. Data from object examination studies similarly indicate that infants are sensitive to violations of three-dimensional human body stimuli starting at 15-18 months of age. The overall pattern of data supports several conclusions about the early development of human body knowledge: (a) detailed visuo-spatial knowledge about the human body is first evident in the second year of life, (b) visuo-spatial knowledge of human faces and human bodies is at least partially independent in infancy and (c) infants' initial visuo-spatial human body representations appear to be highly schematic, becoming more detailed and specific with development. In the final chapter, we explore these conclusions and discuss how levels of body knowledge may interact in early development.
Abstract:
Although information systems (IS) problem solving involves knowledge of both the IS and application domains, little attention has been paid to the role of application domain knowledge. In this study, which is set in the context of conceptual modeling, we examine the effects of both IS and application domain knowledge on different types of schema understanding tasks: syntactic and semantic comprehension tasks and schema-based problem-solving tasks. Our thesis was that while IS domain knowledge is important in solving all such tasks, the role of application domain knowledge is contingent upon the type of understanding task under investigation. We use the theory of cognitive fit to establish theoretical differences in the role of application domain knowledge among the different types of schema understanding tasks. We hypothesize that application domain knowledge does not influence the solution of syntactic and semantic comprehension tasks, for which cognitive fit exists, but does influence the solution of schema-based problem-solving tasks, for which cognitive fit does not exist. To assess performance on different types of conceptual schema understanding tasks, we conducted a laboratory experiment in which participants with high and low IS domain knowledge responded to two equivalent conceptual schemas that represented high and low levels of application knowledge (familiar and unfamiliar application domains). As expected, we found that IS domain knowledge is important in the solution of all types of conceptual schema understanding tasks in both familiar and unfamiliar application domains, and that the effect of application domain knowledge is contingent on task type. Our findings for the EER model were similar to those for the ER model. Given the differential effects of application domain knowledge on different types of tasks, this study highlights the importance of considering more than one application domain in designing future studies on conceptual modeling.
Abstract:
Promoted-ignition testing on carbon steel rods of varying cross-sectional area and shape was performed in high-pressure oxygen to assess the effect of sample geometry on the regression rate of the melting interface. Cylindrical and rectangular geometries and three different cross sections were tested, and the regression rates of the cylinders were compared to those of the rectangular samples at test pressures around 6.9 MPa. Tests were recorded, and video analysis was used to determine the regression rate of the melting interface by a new method based on the drop cycle, which was found to provide a good basis for statistical analysis and to give excellent agreement with the standard averaging methods. Both geometries showed the typical trend of decreasing regression rate of the melting interface with increasing cross-sectional area; however, the effect of geometry was shown to be more significant as the sample cross section becomes larger. Discussion is provided regarding the use of 3.2-mm square rods rather than 3.2-mm cylindrical rods within the standard ASTM test and any effect this may have on the observed regression rate of the melting interface.
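As a simple worked illustration of a drop-cycle-based estimate (the numbers below are invented, not data from this study): if a roughly constant stub length melts away per detached drop, each cycle yields one regression-rate sample, and the set of cycles gives a natural basis for statistics:

```python
import numpy as np

# Hypothetical example: times (s) at which successive drops detach,
# and the rod length (mm) assumed consumed between consecutive drops.
drop_times = np.array([0.0, 1.8, 3.5, 5.4, 7.1, 9.0])
length_per_cycle_mm = 4.0

cycle_durations = np.diff(drop_times)
rates = length_per_cycle_mm / cycle_durations  # mm/s, one per cycle

print(f"mean regression rate: {rates.mean():.2f} mm/s")
print(f"std deviation:        {rates.std(ddof=1):.2f} mm/s")
```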
Abstract:
Objective: To examine the methods used by a sample of regular ecstasy users to determine the content and purity of ecstasy pills, their knowledge of the limitations of available pill-testing methods, and how pill test results would influence their drug use behaviour. Method: Data were collected from regular ecstasy users (n = 810) recruited from all eight capital cities of Australia. Data were analysed using multiple logistic regression and chi-square (χ²) tests of association. Open-ended responses were coded for themes. Results: The majority of the sample (84%) reported attempting to find out the content and purity of ecstasy at least some of the time, most commonly by asking friends or dealers. Less than one quarter (22%) reported personal use of testing kits. There was a moderate level of awareness of the limitations of testing kits among those who reported having used them. Over half (57%) of those reporting personal use of testing kits reported that they would not take a pill if test results indicated that it contained ketamine, and over three quarters (76%) reported that they would not take an "unknown" pill (producing no reaction in a reagent test). Finally, a considerable majority (63%) expressed interest in pill testing should it become more widely available. Conclusions: The majority of regular ecstasy users sampled in this Australian study report previous attempts to determine the content and purity of pills sold as ecstasy. Although only a small proportion have used testing kits, many report that they would do so if the kits were more widely available. The results of pill tests may influence drug use if they indicate that pills contain substances which ecstasy users do not want to ingest or are of unknown content. More detailed research examining the ways in which pill testing may influence drug use is required to inform evidence-based policy.
Abstract:
We surveyed all nurses working at a tertiary paediatric hospital (except casual staff and those who were on leave) from 27 hospital departments. A total of 365 questionnaires were distributed. There were 40 questions in six sections: demographic details, knowledge of e-health, relevance of e-health to the nursing profession, computing skills, Internet use and access to e-health education. A total of 253 surveys were completed (69%). Most respondents reported that they had never had e-health education of any sort (87%) and that their e-health knowledge and skills were low (71%). However, 11% of nurses reported some exposure to e-health through their work. Over half (56%) of respondents indicated that e-health was important, very important or critical for the health professions, while 26% were not sure. The lack of education and training was considered by most respondents (71%) to be the main barrier to adopting e-health. While nurses seemed to have moderate awareness of the potential benefits of e-health, their practical skills and knowledge of the topic were very limited.
Abstract:
Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, in which the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in inverse solutions of ECG: the L-curve, generalized cross-validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate the noise. The performance of the various methods was compared using a concentric-spheres model and a realistic-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when the noise is low; but, as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the realistic-geometry model. Both the GCV and L-curve methods perform well in low- to medium-noise situations.
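A minimal numerical sketch of the zero-order Tikhonov setup and the GCV parameter choice (a generic illustration on a toy matrix, not the authors' torso model): with the SVD A = U diag(s) V^T, the regularized solution minimizing ||Ax - b||^2 + lambda^2 ||x||^2 is x_lambda = sum_i s_i/(s_i^2 + lambda^2) (u_i^T b) v_i, and the GCV score can be evaluated cheaply over a grid of lambda; the L-curve and DP choices can be built from the same SVD quantities:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Zero-order Tikhonov solution min ||Ax-b||^2 + lam^2 ||x||^2
    via the SVD of A (filter factors f_i = s_i^2 / (s_i^2 + lam^2))."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

def gcv(A, b, lam):
    """Generalized cross-validation score G(lam); its minimizer is
    the GCV choice of the regularization parameter."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    f = s**2 / (s**2 + lam**2)
    residual = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
    trace = A.shape[0] - np.sum(f)   # trace of the residual operator
    return residual / trace**2

# Toy ill-posed problem with noisy data (illustrative only)
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 60)) @ np.diag(0.9 ** np.arange(60))
x_true = rng.normal(size=60)
b = A @ x_true + 0.01 * rng.normal(size=80)

lams = np.logspace(-6, 1, 60)
lam_gcv = lams[np.argmin([gcv(A, b, l) for l in lams])]
x_reg = tikhonov_svd(A, b, lam_gcv)
```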
Abstract:
The paper illustrates the role of world knowledge in comprehending and translating texts. A short news item, which displays world knowledge fairly implicitly in condensed lexical forms, was translated by students from English into German. It is shown that their translation strategies changed from a first draft, which was rather close to the surface structure of the source text, to a final version which took situational aspects, text-typological conventions and the different background knowledge of the respective addressees into account. Decisions on how much world knowledge has to be made explicit in the target text, however, must be based on the relevance principle. Consequences for teaching and for the notions of semantic knowledge and world knowledge are discussed.
Abstract:
The relationship between theory and practice has been discussed in the social sciences for generations. Academics from management and organization studies regularly lament the divide between theory and practice. They regret the insufficient academic knowledge of managerial problems and their solutions, and criticize the scholarly production of theories that are not relevant for organizational practice (Hambrick 1994). Despite the prevalence of this topic in academic discourse, we do not know much about what kind of academic knowledge would be useful to practice, how it would be produced and how the transfer of knowledge between theory and practice actually works. In short, we do not know how we can make academic work more relevant for practice or even whether this would be desirable. In this introduction to the Special Issue, we apply philosophical, theoretical and empirical perspectives to examine the challenges of studying the generation and use of academic knowledge. We then briefly describe the contribution of the seven papers that were selected for this Special Issue. Finally, we discuss issues that still need to be addressed, and make some proposals for future avenues of research.
Abstract:
Grafting of antioxidants and other modifiers onto polymers by reactive extrusion has been performed successfully by the Polymer Processing and Performance Group at Aston University. Traditionally, the optimum conditions for the grafting process have been established within a Brabender internal mixer. Transfer of this batch process to a continuous processor, such as an extruder, has typically been empirical. To have more confidence in the success of direct transfer of the process requires knowledge of, and comparison between, residence times, mixing intensities, shear rates and flow regimes in the internal mixer and in the continuous processor. The continuous processor chosen for the current work is the closely intermeshing, co-rotating twin-screw extruder (CICo-TSE). CICo-TSEs contain screw elements that convey material with a self-wiping action and are widely used for polymer compounding and blending. Of the different mixing modules contained within the CICo-TSE, the trilobal elements, which impose intensive mixing, and the mixing discs, which impose extensive mixing, are of importance when establishing the intensity of mixing. In this thesis, the flow patterns within the various regions of the single-flighted conveying screw elements and within both the trilobal-element and mixing-disc zones of a Betol BTS40 CICo-TSE have been modelled using the computational fluid dynamics package Polyflow. A major obstacle encountered when solving the flow problem within all of these sets of elements arises from both the complex geometry and the time-dependent flow boundaries as the elements rotate about their fixed axes. Simulation of the time-dependent boundaries was overcome by selecting a number of sequential 2D and 3D geometries used to represent partial mixing cycles. The flow fields were simulated using the ideal rheological properties of polypropylene and characterised in terms of velocity vectors, the shear stresses generated and a parameter known as the mixing efficiency. The majority of the large 3D simulations were performed on the Cray J90 supercomputer situated at the Rutherford Appleton Laboratory, with pre- and post-processing operations achieved via a Silicon Graphics Indy workstation. A mechanical model was constructed, consisting of various CICo-TSE elements rotating within a transparent outer barrel. A technique was developed using coloured viscous clays whereby the flow patterns and mixing characteristics within the CICo-TSE may be visualised. In order to test and verify the simulated predictions, the patterns observed within the mechanical model were compared with the flow patterns predicted by the computational model. The flow patterns within the single-flighted conveying screw elements in particular showed good agreement between the experimental and simulated results.
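As a generic illustration of one common way to quantify a "mixing efficiency" from a simulated velocity field (a sketch assuming the flow number lambda = |D| / (|D| + |Omega|), where D and Omega are the rate-of-deformation and vorticity tensors; this is plain Python on a sampled grid, not Polyflow code): lambda is about 0.5 in simple shear and approaches 1 in locally extensional, dispersive flow:

```python
import numpy as np

def flow_number_2d(u, v, dx, dy):
    """Flow number lambda = |D| / (|D| + |Omega|) for a 2D velocity
    field sampled on a grid (axis 0 = y, axis 1 = x), using finite
    differences and Frobenius tensor norms."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    D12 = 0.5 * (du_dy + dv_dx)
    D_mag = np.sqrt(du_dx**2 + dv_dy**2 + 2.0 * D12**2)
    W_mag = np.sqrt(2.0) * 0.5 * np.abs(du_dy - dv_dx)
    return D_mag / (D_mag + W_mag + 1e-30)

# Sanity check: simple shear u = y, v = 0 gives lambda = 0.5
y, x = np.mgrid[0.0:1.0:50j, 0.0:1.0:50j]
lam = flow_number_2d(y, np.zeros_like(y), dx=1.0 / 49, dy=1.0 / 49)
```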