65 results for Quality systems
Abstract:
Wear is the phenomenon that determines the lifetime of collector strips. Since wear is an unavoidable effect in pantograph-catenary systems, it is necessary to determine the optimal operating conditions that mitigate its consequences. In this study we have developed a simulation model of the pantograph-overhead conductor rail system which allows the dynamic behaviour of the system to be evaluated through the contact force. With these results we evaluate the quality of current collection, calculate pantograph wear and define the optimal operating conditions of the pantograph-overhead conductor rail system.
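As an illustration of how current-collection quality can be evaluated once such a simulation yields a contact-force time series, a minimal sketch follows; the force series, sampling rate and the F_m - 3*sigma criterion are placeholders and common-practice indicators, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical contact-force time series from the pantograph simulation [N].
fs = 200.0                                  # sampling rate [Hz] (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)          # one minute of simulated running
force = 120.0 + 25.0 * np.sin(2 * np.pi * 1.3 * t) \
        + np.random.normal(0.0, 10.0, t.size)

mean_force = force.mean()                   # mean contact force F_m
sigma = force.std()                         # standard deviation of the force
loss_ratio = np.mean(force <= 0.0) * 100    # % of samples with contact loss

# A common quality criterion keeps the statistical minimum F_m - 3*sigma
# positive, i.e. contact is statistically never lost.
print(f"F_m = {mean_force:.1f} N, sigma = {sigma:.1f} N, "
      f"F_m - 3*sigma = {mean_force - 3 * sigma:.1f} N, "
      f"contact loss = {loss_ratio:.2f} %")
```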
Abstract:
Communications Based Train Control (CBTC) systems require high-quality radio data communications for train signalling and control. Currently, most of these systems use the 2.4 GHz band with proprietary radio transceivers and a leaky feeder as the distribution system, and all of them demand a high-QoS radio network to improve the efficiency of railway networks. We present narrowband, broadband and correlated data measurements taken in the Madrid underground with a transmission system at 2.4 GHz in a 2 km test network in subway tunnels. The proposed architecture has a strong overlap between cells to improve reliability and QoS. The radio planning of the network is carefully described and modelled with narrowband and broadband measurements and statistics. The result is a network with 99.7% of packets transmitted correctly and an average propagation delay of 20 ms. These results fulfil the QoS specifications of CBTC systems.
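To make the reported QoS figures concrete (99.7% of packets transmitted correctly, 20 ms average delay), a minimal sketch of how such figures can be computed from a packet log; the log format and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    sent_ms: float                 # transmission timestamp [ms]
    received_ms: Optional[float]   # reception timestamp [ms]; None if lost

# Hypothetical log from the 2.4 GHz test network (times in milliseconds).
log = [Packet(0.0, 18.5), Packet(10.0, 31.2), Packet(20.0, None),
       Packet(30.0, 49.8), Packet(40.0, 61.0)]

delivered = [p for p in log if p.received_ms is not None]
success_rate = 100.0 * len(delivered) / len(log)
avg_delay = sum(p.received_ms - p.sent_ms for p in delivered) / len(delivered)
print(f"packets OK: {success_rate:.1f} %, average delay: {avg_delay:.1f} ms")
```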
Abstract:
The main objective of ventilation systems in case of fire is to reduce the possible consequences by achieving the best possible conditions for the evacuation of the users and the intervention of the emergency services. In recent years, the required quick response of the ventilation system, from normal to emergency mode, has been improved by the use of automatic and semi-automatic control systems, which reduce response times by supporting the operators' decision making and by using pre-defined strategies. A further step consists in the use of closed-loop algorithms, which take into account not only the initial conditions but also their evolution (air velocity, traffic situation, etc.), optimizing the quality of the smoke-control process.
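The closed-loop idea can be pictured as a feedback controller that trims fan thrust until the measured longitudinal air velocity matches the setpoint demanded by the smoke-control strategy. The abstract does not specify the algorithm; the PI loop and first-order tunnel response below are illustrative assumptions:

```python
# Minimal sketch of a closed-loop ventilation controller (assumed PI form).
setpoint = 3.0      # target air velocity for smoke control [m/s] (assumed)
velocity = 0.5      # measured longitudinal air velocity [m/s]
kp, ki = 0.8, 0.2   # controller gains (illustrative)
integral = 0.0
dt = 1.0            # control period [s]

for step in range(120):
    error = setpoint - velocity
    integral += error * dt
    fan_command = kp * error + ki * integral   # normalized fan thrust demand
    # First-order stand-in for the tunnel's aerodynamic response to the fans.
    velocity += dt * (fan_command - 0.3 * velocity)

print(f"velocity after 2 min: {velocity:.2f} m/s")
```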
Abstract:
With the advent of the Universal Technical Standard for Solar Home Systems, procedures to test the compliance of SHS fluorescent lamps with the standard have been developed. Defining the laboratory testing procedures is a necessary step in any lamp quality assurance procedure. Particular attention has been paid to the simplicity and affordability of the tests, in order to facilitate local application of the testing procedures, for example by the organisations which carry out electrification programmes. The set of test procedures has been applied to a representative collection of 42 lamps from many different countries, directly acquired in the current photovoltaic rural electrification market. The tests cover lamp resistance under normal operating conditions, lamp reliability under extreme and under abnormal conditions, and lamp luminosity. Results are discussed and some recommendations for updating the relevant standard are given. The selected technical standard, together with the proposed testing procedures, forms the basis of a complete quality assurance tool that can be applied locally in ordinary electrical laboratories. Full testing of a lamp requires less than one month, which is very reasonable in the context of quality assurance programmes.
Abstract:
Inverter features are reviewed from a PV systems perspective, with a view to contributing to possible codes, procurement specifications and testing procedures, in order to assure the technical quality of these systems. A laboratory testing campaign has been carried out on a representative set of sixteen currently available inverters and a set of the most common AC appliances. The results of the tests are discussed with the aim of presenting the particular features of operating AC appliances in PV systems and the provisions to be taken into account in PV system design. The development of the testing procedures has followed the motto "keep it as simple as possible", in order to make their application easier in conventional laboratories in developing countries.
Abstract:
We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in Ciao, ISO-Prolog, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also identify precisely to what version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. One of the main novelties of lpdoc is that these assertions and comments are written using the Ciao system assertion language, which is also the language of communication between the compiler and the user and between the components of the compiler. This allows a significant synergy among specification, debugging, documentation, optimization, etc. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and to process programs documented in this way as usual. The documentation can be generated interactively from emacs or from the command line, in many formats including texinfo, dvi, ps, pdf, info, ascii, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate "man" pages (Unix man page format), nicely formatted plain ASCII "readme" files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. The lpdoc manual, all other Ciao system manuals, and parts of this paper are generated by lpdoc.
Abstract:
Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use these kinds of systems to connect to large communities in order to gather ideas for the improvement of products or services. Originating from simple suggestion boxes, Idea Management Systems have advanced beyond collecting ideas and aspire to be knowledge management solutions capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overflow and to recognizing submissions of questionable quality with a reasonable allocation of time and effort. This thesis focuses on the idea assessment problem area and contributes a number of solutions that allow ideas submitted into an Idea Management System to be filtered, compared and evaluated. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system to compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management to successfully recognize the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source software alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources to undertake experiments in the Idea Management Systems area and has managed to become a forum that has gathered a number of academic and industrial partners.
Abstract:
We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. The documentation can be generated in many formats including texinfo, dvi, ps, pdf, info, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate "man" pages (Unix man page format), nicely formatted plain ASCII "readme" files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also identify precisely to what version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. These assertions and comments are written using the Ciao system assertion language. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and to process programs documented in this way as usual. The lpdoc manual, all other Ciao system manuals, and most of this paper are generated by lpdoc.
Abstract:
One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material, which simulate free-space propagation conditions. Moreover, these facilities can be employed independently of the weather conditions and allow measurements free from interferences. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been made in order to give a general vision of the possibilities to characterize or to reduce the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis: a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply a spatial filtering. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field sample and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later replacement easier; the second part is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
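A minimal sketch of the modal-filtering idea for planar near-field measurements described above: the measured field is transformed to its plane-wave spectrum, and spectral content outside the visible region, which cannot correspond to radiated field and is therefore mostly noise, is suppressed before the far field is computed. The frequency, sampling and measured field below are placeholders, and the cos(theta) factor of the full transformation is omitted:

```python
import numpy as np

f = 10e9                      # measurement frequency [Hz] (assumed)
k0 = 2 * np.pi * f / 3e8      # free-space wavenumber [rad/m]
dx = dy = 0.01                # sample spacing on the measurement plane [m]
N = 128                       # samples per dimension

# E would come from the near-field measurement; random data stands in here.
E = np.random.randn(N, N) + 1j * np.random.randn(N, N)

# Plane-wave spectrum of the measured field.
A = np.fft.fftshift(np.fft.fft2(E))
kx = np.fft.fftshift(np.fft.fftfreq(N, dx)) * 2 * np.pi
ky = np.fft.fftshift(np.fft.fftfreq(N, dy)) * 2 * np.pi
KX, KY = np.meshgrid(kx, ky, indexing="ij")

# Modal filter: only |k_t| <= k0 corresponds to propagating (visible) modes;
# spectral content outside this disc cannot radiate and is mostly noise.
visible = KX**2 + KY**2 <= k0**2
A_filtered = np.where(visible, A, 0.0)

# The far-field pattern is proportional to the filtered spectrum over the
# visible region.
far_field = np.abs(A_filtered)
```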
Abstract:
The aim of this work was to evaluate different management strategies to optimize rabbit production under chronic heat stress; to achieve this, three trials were conducted. In the first trial, to find the optimal cage density under tropical very dry forest conditions, growth performance, mortality rate, injured animals and carcass performance were measured in an initial population of 300 crossbred rabbits (New Zealand, California, Butterfly, Dutch and Satin) weaned at 30 days (535 ± 8 g, standard error). The treatments evaluated were 6, 12, 18 and 24 rabbits/m2 (3, 6, 9 and 12 rabbits/cage, respectively, each cage of 0.5 m2). The maximal temperature-humidity index (31 to 35) indicated severe heat stress from weaning to 2.2 kg body weight (the experimental period). At the end of the experimental period 10, 20, 30 and 30 rabbits from the treatments of 6, 12, 18 and 24 rabbits/m2, respectively, were slaughtered and carcass performance was recorded. Average daily gain and feed intake decreased by 0.31 ± 0.070 and 1.20 ± 0.25 g, respectively, per each unit of increase in density at the beginning of the experiment (P = 0.001), and the length of the fattening period increased by 0.91 ± 0.16 d (P = 0.001) per each unit of increase in density. However, rabbit production (kg/m2) increased linearly and quadratically with density (P < 0.008). Animals housed at the highest density, compared to the lowest, tended to show a higher incidence of ringworm (68.9 vs 39.4%; P = 0.075), injured animals (16.8 vs 3.03%; P = 0.12) and mortality (20.5 vs 9.63%; P = 0.043). The proportion of scapular fat increased linearly with increasing density (P = 0.042). Increasing density linearly reduced dorsal length (P = 0.001), and reduced drip-loss percentage linearly and quadratically (P = 0.097 and 0.018, respectively). In the second trial, 46 nulliparous rabbit does (23 clipped and 23 unclipped) with a body weight of 3.67 ± 0.05 kg (s.e.) were used to evaluate heat stress and circadian rhythms, comparing unclipped and clipped rabbit does, and to study whether a more extensive breeding system increases litter performance at weaning without impairing rabbit doe performance. Rectal temperature and feed and water intake were recorded for 24 h. The rabbit does were mated 7 d after the circadian measurements and randomly assigned to two breeding systems: control (C), mated at 14 d after parturition with the litter weaned at 35 d of age, and extensive (E), mated at 21 d after parturition with the litter weaned at 42 d of age. Rabbit doe and litter performance were evaluated over the first three cycles. Two hundred and twenty-eight weaned rabbits were divided into two cage sizes, 0.5 and 0.25 m2, at the same density (16 rabbits/m2), and growing performance was recorded. Farm and rectal temperatures were minimal, and feed and water intake maximal, during the night (P < 0.001). Unclipped rabbit does showed a higher rectal temperature (P = 0.045) and a lower feed intake (P = 0.019) than clipped does, which suggests lower heat stress in the latter. The number of kits weaned per litter was reduced by 33% (P = 0.038) in the C group; this reduction was more pronounced in the 2nd and 3rd cycles than in the first (P ≤ 0.054). Rabbit doe feed efficiency tended to decrease in the E group with respect to the C group (P = 0.093), and it worsened by 48% from the first to the third cycle (P = 0.014). Growing rabbits from the E group were heavier at weaning (by 38%; P < 0.001) and showed a higher feed intake (+7.4%) and a lower feed efficiency (-8.4%) throughout the fattening period (P ≤ 0.056) with respect to the C group.
Cage size had a minor influence on growing performance. In the third trial, forty-five non-pregnant, non-lactating rabbit does (21 nulliparous and 24 multiparous) were randomly assigned to farm water or to potable water in order to study whether an improvement in water quality can positively affect the rabbit does' response to heat stress during pregnancy and lactation. A transponder was implanted in each animal to record subcutaneous temperature at 07:30 and 14:30 h. The experimental period extended from pregnancy (with no lactation) to the next lactation (until day 28). Body temperature and milk production were recorded daily, and body condition and feed and water intake weekly. Water quality did not affect any trait (P ≥ 0.15). Pregnant rabbit does were classified as does that weaned (W: 47%), does that did not wean (NW: 44%) and pregnant does that did not deliver (NB: 9%). Body temperature and feed intake decreased during pregnancy (P ≤ 0.031), but water intake remained constant. In this period body temperature decreased with metabolic weight (P ≤ 0.009). In W and NW does, the energy and protein balances worsened from mating to birth (P ≤ 0.011). The body temperature of W does tended to be the lowest (P ≤ 0.090). Pregnancy length and the total number of kits born tended to be longer and higher in NW than in W does (P = 0.10 and 0.053, respectively). Kit mortality at birth and from birth to 14 d of lactation was high, and worse for NW than for W does (97 vs. 40%; P < 0.001). Body temperature during lactation was maximal at day 12, and milk production increased it (P ≤ 0.025). In conclusion, under our heat-stress conditions, densities higher than 18 rabbits/m2 (34 kg/m2) at the end of fattening are not recommended regardless of cage size; gestation and lactation productivity were impaired not only when lactation was extended and over successive reproductive cycles but also because of reduced embryo/kit survival; and, finally, improving water quality did not attenuate the negative effects of heat stress.
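For reference, one temperature-humidity index formulation often used in rabbit heat-stress studies (attributed to Marai et al.; the abstract does not state which formula was applied, so this is an assumption) is

$$\mathrm{THI} = T_{db} - (0.31 - 0.31\,\mathrm{RH})\,(T_{db} - 14.4),$$

where $T_{db}$ is the dry-bulb temperature in °C and RH is the relative humidity expressed as a fraction; values above roughly 28.9 are usually classed as severe heat stress, consistent with the maximal THI of 31 to 35 reported in the first trial.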
Abstract:
As the use of recommender systems becomes more consolidated on the Net, an increasing need arises to develop an evaluation framework for collaborative filtering measures and methods that is capable of testing not only the prediction and recommendation results, but also other aspects which until now were considered secondary, such as the novelty of the recommendations and the users' trust in them. This paper provides: (a) measures to evaluate the novelty of the users' recommendations and the trust in their neighborhoods; (b) equations that formalize and unify the collaborative filtering process and its evaluation; (c) a framework based on the above-mentioned elements that enables the evaluation of the quality results of any collaborative filtering applied to the desired recommender systems, using four graphs: quality of the predictions, of the recommendations, of the novelty and of the trust.
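To make the novelty side concrete, a minimal sketch of one common way to score the novelty of a recommendation list, as the mean self-information of the recommended items' popularity; this is an illustrative metric computed on assumed data, not necessarily the measures proposed in the paper:

```python
import math

# Hypothetical ratings data: item -> number of users who rated it.
popularity = {"A": 900, "B": 40, "C": 5, "D": 350}
n_users = 1000

def novelty(recommended):
    """Mean self-information -log2(p(i)) of the recommended items."""
    return sum(-math.log2(popularity[i] / n_users)
               for i in recommended) / len(recommended)

print(novelty(["A", "D"]))   # popular items -> low novelty
print(novelty(["B", "C"]))   # long-tail items -> high novelty
```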
Abstract:
Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can therefore potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals for effective knowledge management related to several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource that enables testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, significant weaknesses were detected in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: A different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing the experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
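As a sketch of what the structure of a software-testing lesson learned defined by point (i) might look like in practice; the field names are hypothetical, since the abstract does not enumerate them:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestingLesson:
    """Illustrative record for a software-testing lessons learned system."""
    title: str
    context: str                 # project/test environment where it arose
    problem: str                 # what went wrong (or right)
    recommendation: str          # what to do next time
    activity: str                # e.g. "test design", "test management"
    keywords: List[str] = field(default_factory=list)

lesson = TestingLesson(
    title="Flaky UI tests under parallel execution",
    context="Web regression suite, nightly CI",
    problem="Shared fixtures caused intermittent failures when run in parallel",
    recommendation="Isolate fixtures per worker and tag order-dependent tests",
    activity="test execution",
    keywords=["flaky", "parallelism", "fixtures"],
)
```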
Abstract:
Many industries use highly technological solutions to improve quality in all of their products; the steel industry is one example. Several automatic surface-inspection systems are used in the steel industry to identify various types of defects and to help operators decide whether to accept, reroute, or downgrade the material, subject to the assessment process. This paper focuses on promoting a strategy that considers all defects in an integrated fashion. It does this by managing the uncertainty about the exact position of a defect, due to varying process conditions, by means of Gaussian additive influence functions. The relevance of the approach lies in making consistency and reliability between surface-inspection systems possible. The results obtained are an increase in confidence in the automatic inspection system and the ability to introduce improved prediction and advanced routing models. The prediction is provided to technical operators to help them in their decision-making process, and it is shown to reduce the 40% of coils that are downgraded at the hot strip mill because of specific defects. In addition, this technology facilitates an increase of 50% in the accuracy of the estimate of defect survival after the cleaning facility, in comparison with the former approach. The proposed technology is implemented by means of software-based, multi-agent solutions, which make possible the independent treatment of information, presentation, quality analysis, and other relevant functions.
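The Gaussian additive influence idea can be pictured as follows: each detection contributes a Gaussian centred on its reported position, with a spread that reflects the positional uncertainty induced by the process conditions, and the contributions add up into an influence profile along the coil. A minimal sketch with placeholder positions, weights and spreads:

```python
import numpy as np

x = np.linspace(0.0, 100.0, 1001)   # position along the coil [m]

# Hypothetical detections: (reported position [m], severity weight,
# position uncertainty sigma [m] derived from the process conditions).
defects = [(12.0, 1.0, 0.5), (13.1, 0.6, 1.5), (70.0, 0.9, 0.8)]

influence = np.zeros_like(x)
for mu, w, sigma in defects:
    influence += w * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Regions where overlapping Gaussians pile up suggest the same physical
# defect seen by different inspection systems under different conditions.
print(f"peak influence {influence.max():.2f} at x = {x[influence.argmax()]:.1f} m")
```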
Abstract:
The aim of this study was to evaluate the sustainability of farm irrigation systems in the Cébalat district in northern Tunisia. It addressed the challenging topic of sustainable agriculture through a bio-economic approach linking a biophysical model to an economic optimisation model. A crop growth simulation model (CropSyst) was used to build a database to determine the relationships between agricultural practices, crop yields and environmental effects (salt accumulation in the soil and leaching of nitrates) in a context of high climatic variability. The database was then fed into a recursive stochastic model set for a 10-year plan, which allowed the effects of cropping patterns on farm income, salt accumulation and nitrate leaching to be analysed. We assumed that the long-term sustainability of soil productivity might conflict with farm profitability in the short term. Assuming a discount rate of 10% (the base scenario), the model closely reproduced the current system and made it possible to predict the degradation of soil quality due to long-term salt accumulation. The results showed more salt accumulation in the soil for the base scenario than for the alternative scenario (discount rate of 0%), induced by the application of a greater quantity of water per hectare in the alternative scenario than in the base scenario. The results also showed that nitrogen leaching is very low for both discount rates and all climate scenarios. In conclusion, the results show that the difference in farm income between the alternative and base scenarios increases over time, reaching 45% after 10 years.
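The role of the discount rate in this trade-off can be stated compactly. A recursive model of this kind maximises the discounted stream of farm income over the 10-year plan, for example

$$\max \sum_{t=1}^{10} \frac{\pi_t}{(1+r)^t},$$

where $\pi_t$ is farm income in year $t$ and $r$ is the discount rate (10% in the base scenario, 0% in the alternative). With $r = 0.10$, income ten years ahead is weighted by $1/1.1^{10} \approx 0.39$, so practices that protect long-term soil quality, such as applying more leaching water per hectare, are undervalued relative to the $r = 0$ case. This objective is an assumption consistent with the abstract, not a formula taken from the paper.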
Abstract:
Current trends in the fields of artificial intelligence and expert systems are moving towards the exciting possibility of reproducing and simulating human expertise and expert behaviour in a knowledge base, coupled with an appropriate, partially 'intelligent', computer code. This paper deals with the prediction of quality levels in concrete structures with the assistance of an expert system, QL-CONST1, which is able to reason about this specific field of structural engineering. Evidence, hypotheses and factors related to this field of human knowledge have been codified into a knowledge base. This knowledge base has been prepared in terms of the probabilities of the presence of either hypotheses or evidence and the conditional presence of both. Human experts in the fields of structural engineering and structural safety contributed their invaluable knowledge and assistance to the construction of the knowledge base. Some illustrative examples for the validation of the expert system's behaviour are included.
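The description above, in terms of probabilities of hypotheses, of evidence, and of their conditional presence, suggests Bayesian-style updating. A minimal statement of that rule, offered as an assumption about how QL-CONST1 reasons rather than a formula taken from the paper, is

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

so that observing a piece of evidence $E$, say a low strength result on a concrete core, revises the prior probability $P(H)$ of a quality-level hypothesis $H$ into the posterior $P(H \mid E)$.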