909 results for Lead-time and set-up optimization


Relevance: 100.00%

Publisher:

Abstract:

Background and aims Eucalyptus plantations cover 20 million hectares on highly weathered soils. Large amounts of nitrogen (N) exported during harvesting lead to concerns about their sustainability. Our goal was to assess the potential of introducing Acacia mangium trees in highly productive Eucalyptus plantations to enhance soil organic matter stocks and N availability. Methods A randomized block design was set up in a Brazilian Ferralsol to assess the effects of mono-specific Eucalyptus grandis (100E) and Acacia mangium (100A) stands and mixed plantations (50A:50E) on soil organic matter stocks and net N mineralization. Results A 6-year rotation of mono-specific A. mangium plantations led to carbon (C) and N stocks in the forest floor that were 44% lower and 86% higher, respectively, than in pure E. grandis stands. Carbon and N stocks were not significantly different between the three treatments in the 0-15 cm soil layer. Field incubations conducted every 4 weeks during the last two years of the rotation estimated net soil N mineralization in 100A and 100E at 124 and 64 kg ha(-1) yr(-1), respectively. Nitrogen inputs to the soil through litterfall were of the same order as net N mineralization. Conclusions Acacia mangium trees greatly increased the turnover rate of N in the topsoil. Introducing A. mangium trees might improve mineral N availability in soils where commercial Eucalyptus plantations have been managed for a long time.

Relevance: 100.00%

Publisher:

Abstract:

Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei can be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion and other rate processes. In this way, high-resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution and beyond. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes. It represents a versatile tool for the analysis of foods. Many NMR studies have been reported in the literature on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, eggs, starch granules and flour, using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such a method measures or selects a single descriptive variable from the whole spectrum and, in the end, only this variable is analysed. Applied to HR-NMR data, this univariate approach leads to several problems, due especially to the complexity of an NMR spectrum. The spectrum is composed of different signals belonging to different molecules, and the same molecule can also be represented by several signals, which are generally strongly correlated. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. Thus, when dealing with complex samples such as foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered in their wholeness and, to analyse them, the whole data matrix must be taken into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct, different purposes, and the aims can be divided into three main groups:
• data description (explorative data-structure modelling of any generic n-dimensional data matrix, e.g. PCA);
• regression and prediction (PLS);
• classification and prediction of class belonging for new samples (LDA, PLS-DA and ECVA).
The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful to point out the metabolic consequences of a specific modification of foodstuffs, avoiding the use of a targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study.
This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation due to different instrumental set-up, manual spectra processing and sample-preparation artefacts; A2) application of multivariate chemometric models in data analysis.
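As an illustration of the kind of unsupervised, whole-spectrum exploration described above (PCA applied to the full data matrix rather than to a single selected variable), the following minimal Python sketch projects a bucketed NMR data matrix onto its first two principal components. The matrix size, array names and class labels are placeholders, not data from the thesis.

# Minimal PCA exploration of a bucketed NMR data matrix (illustrative sketch only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 250))          # placeholder for a real (samples x buckets) matrix
labels = ["class_A"] * 20 + ["class_B"] * 20  # placeholder sample classes

X = StandardScaler().fit_transform(spectra)   # column-wise autoscaling of the buckets

pca = PCA(n_components=2)
scores = pca.fit_transform(X)                 # sample scores describing concerted variation
print("explained variance ratio:", pca.explained_variance_ratio_)
print(labels[0], "score on PC1/PC2:", scores[0])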

Relevance: 100.00%

Publisher:

Abstract:

The stabilization of nanoparticles against irreversible particle aggregation and oxidation reactions is a requirement for further advancement in nanoparticle science and technology. For this reason, research on this topic focuses on the synthesis of various metal nanoparticles protected with monolayers containing different reactive head groups and functional tail groups. In this work, cuprous bromide nanocrystals with a diameter of about 20 nanometers have been synthesized according to a new synthetic method, adding ascorbic acid dropwise to an aqueous solution of lithium bromide and cupric chloride under continuous stirring and nitrogen flux. Butanethiolate-protected Cu nanoparticles have been synthesized according to three different synthesis methods. Their morphologies appear related to the physicochemical conditions during the synthesis and to the dispersing medium used to prepare the sample. Synthesis method II yields stable nanoparticles of 1-2 nm in size, both isolated and forming clusters. Nanoparticle cluster formation was enhanced when water was used as the dispersing medium, probably due to the hydrophobic nature of the butanethiolate layers coating the nanoparticle surface. Synthesis methods I and III led to large, unstable, spherical nanoparticles with sizes ranging between 20 and 50 nm. These nanoparticles appeared in the TEM micrographs with the same morphology independently of the dispersing medium used in the sample preparation. The stability and dimensions of the copper nanoparticles appear inversely related. Using the same methods described above for the butanethiolate-protected copper nanoparticles, 4-methylbenzenethiol-protected copper nanoparticles have been prepared. Diffractometric and spectroscopic data reveal that decomposition processes did not occur in the 4-methylbenzenethiol-protected copper nanoparticles precipitated from formic acid or from water over a period of six months. The anticarcinogenic effects of Se by multiple mechanisms have been extensively investigated and documented; Se is regarded as a genuine nutritional cancer-protecting element with a significant protective effect against major forms of cancer. Furthermore, phloroglucinol was found to possess cytoprotective effects against oxidative stress caused by reactive oxygen species (ROS), which are associated with cell and tissue damage and are contributing factors to inflammation, aging, cancer, arteriosclerosis, hypertension and diabetes. The goal of our work has been to set up a new method to synthesize, under mild conditions, amorphous Se nanoparticles surface-capped with phloroglucinol, which is used during synthesis as the reducing agent to obtain stable Se nanoparticles in ethanol, exploiting the synergy between the specific anticarcinogenic properties of Se and the antioxidant properties of phloroglucinol. We have synthesized selenium nanoparticles protected by phenolic molecules chemically bonded to their surface. The phenol molecules coating the nanoparticle surfaces form poorly ordered arrays, as can be seen from the broader shape of the absorptions in the FT-IR spectrum with respect to those appearing in the spectrum of crystalline phenol. On the other hand, metallic nanoparticles with unique optical properties, facile surface chemistry and an appropriate size scale are generating much enthusiasm in nanomedicine. In fact, Au nanoparticles have immense potential for both cancer diagnosis and therapy.
In particular, Au nanoparticles efficiently convert strongly absorbed light into localized heat, which can be exploited for the selective laser photothermal therapy of cancer. Accordingly, composites of metal nanoparticles and hydroxyapatite (HA) nanocrystals should have tremendous potential in novel methods for cancer therapy. 11-Mercaptoundecanoic-acid-protected Au4Ag1 nanoparticles adsorbed on nanometric apatite crystals have been successfully prepared as an anticancer nanoparticle delivery system, utilizing biomimetic hydroxyapatite nanocrystals as delivery agents. Furthermore, natural chrysotile, formed by densely packed bundles of multiwalled hollow nanotubes, is a mineral very suitable for nanowire preparation when its inner nanometer-sized cavity is filled with a proper material. Bundles of chrysotile nanotubes can then behave as host systems, where the large interchannel separation is expected to prevent interaction between individual guest metallic nanoparticles and to act as a confining barrier. Chrysotile nanotubes have been filled with molten metals such as Hg, Pb and Sn, with semimetals such as Bi, Te and Se, and with semiconductor materials such as InSb, CdSe, GaAs and InP, using both high-pressure techniques and metal-organic chemical vapor deposition. Under hydrothermal conditions, chrysotile nanocrystals have been synthesized as a single phase and can be used for nanowire preparation by filling their inner nanometer-sized cavity with metallic nanoparticles. In this research work, stoichiometric synthetic chrysotile nanotubes have been synthesized, characterized and partially filled with bimetallic and monometallic, highly monodisperse nanoparticles with diameters ranging from 1.7 to 5.5 nm depending on the core composition (Au, Au4Ag1, Au1Ag4, Ag). In the case of 4-methylbenzenethiol-protected silver nanoparticles, the filling was carried out by convection and capillarity effects at room temperature and pressure using a suitable organic solvent. We have obtained new, interesting nanowires constituted of metallic nanoparticles confined in inorganic nanotubes with an inner cavity of 7 nm and an insulating wall with a thickness ranging from 7 to 21 nm.

Relevance: 100.00%

Publisher:

Abstract:

The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ^7Li^+ are simultaneously measured with and against the direction of motion of the ions. By employing saturation or optical double resonance spectroscopy, the Doppler broadening as caused by the ions' velocity distribution is eliminated. From these shifts both time dilation as well as the ion velocity can be extracted with high accuracy allowing for a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve a robust control of the laser frequencies required for the beam times, a redundant system of frequency standards consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb was developed. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam, as well as a fluorescence detection system were built up. During the first experiments, the production, acceleration and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam allowed for the first time to measure its velocity directly via the Doppler effect, which resulted in a new improved calibration of the electron cooler. In the following step the first sub-Doppler spectroscopy signals from an ion beam at 33.8 %c could be recorded. The unprecedented accuracy in such experiments allowed to derive a new upper bound for possible higher-order deviations from special relativity. Moreover future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments; and will thus lead to a further contribution to the test of the standard model.
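The essence of such a two-laser test can be written down compactly. The following LaTeX fragment sketches the standard special-relativity prediction for simultaneous parallel and antiparallel excitation of a clock transition with rest-frame frequency \nu_0 in ions moving at velocity \beta c; it is the textbook relation behind this class of experiments, not the thesis' own notation or data.

% Resonance conditions for the co-propagating (p) and counter-propagating (a) lasers:
\nu_p \,\gamma\,(1-\beta) = \nu_0, \qquad
\nu_a \,\gamma\,(1+\beta) = \nu_0, \qquad
\gamma = \frac{1}{\sqrt{1-\beta^2}} .
% Multiplying the two conditions, the velocity dependence cancels exactly if time
% dilation follows special relativity:
\nu_p\,\nu_a\,\gamma^2\,(1-\beta^2) = \nu_0^2
\quad\Longrightarrow\quad
\nu_p\,\nu_a = \nu_0^2 .
% Any measured deviation of \nu_p \nu_a / \nu_0^2 from unity therefore bounds
% departures from the relativistic Doppler formula.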

Relevance: 100.00%

Publisher:

Abstract:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous efforts have been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show large errors in comparison with experimental data. Thus, even today, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder dates back more than twenty years, and that work had only limited success because of the computing power and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments and material characterization experiments. Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
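A minimal Python sketch of the kind of constrained geometry search described above follows. The surrogate simulate_extruder function, the parameter ranges and the temperature limit are invented placeholders standing in for the full 3-D two-phase simulation; only the structure (maximize output rate over screw lead and channel depth subject to an exit-temperature cap) mirrors the description.

# Illustrative constrained search over screw lead (pitch) and metering-channel depth.
import itertools

def simulate_extruder(lead_mm, depth_mm):
    # Placeholder surrogate: output rises with channel volume; melt temperature rises
    # with depth and falls slightly with lead. Not a real extrusion model.
    output_kg_h = 0.9 * lead_mm * depth_mm
    exit_temp_c = 180.0 + 6.0 * depth_mm - 0.2 * lead_mm
    return output_kg_h, exit_temp_c

T_MAX_C = 230.0                                   # hypothetical exit-temperature limit
best = None
for lead, depth in itertools.product(range(60, 121, 5), range(3, 11)):
    output, temp = simulate_extruder(lead, depth)
    if temp <= T_MAX_C and (best is None or output > best[0]):
        best = (output, lead, depth, temp)

print("best output %.1f kg/h at lead=%d mm, depth=%d mm (exit T=%.1f C)" % best)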

Relevance: 100.00%

Publisher:

Abstract:

A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from various lignocellulosic biomass types, such as wood, forest residues, and agricultural residues, has the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. As a precursor to the simulation and optimization modeling, the GIS-based methodology was used to preselect potential facility locations for biofuel production from forest biomass by employing a series of decision factors; the resulting candidate sites served as inputs for the simulation and optimization models. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, a railroad transportation network, a state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, a population census, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation and storage. On-site storage was built to cover the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms for consistency with cost. Compared with the optimization model, the simulation model provides a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated and the inventory level was tracked year-round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility is bounded by an upper limit of 50 MGY and a lower limit of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
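The facility-location core of such a model can be sketched as a small mixed-integer program. The toy Python example below uses the open-source PuLP library; the sites, counties, costs and the 60 MGY demand are hypothetical, and the actual thesis model is implemented in MPL, not PuLP. Only the open/ship structure and the 30-50 MGY capacity bounds follow the description above.

# Illustrative facility-location sketch for a biofuel supply chain (all numbers made up).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

sites = ["A", "B", "C"]                 # GIS-preselected candidate locations (hypothetical)
counties = ["c1", "c2", "c3", "c4"]     # biomass supply regions (hypothetical)
ship_cost = {(s, c): 1.0 + 0.3 * i + 0.2 * j
             for i, s in enumerate(sites) for j, c in enumerate(counties)}
supply = {"c1": 25, "c2": 20, "c3": 30, "c4": 15}   # MGY-equivalent feedstock per county
demand = 60                                          # total annual biofuel demand, MGY

open_site = LpVariable.dicts("open", sites, cat=LpBinary)
flow = LpVariable.dicts("flow", [(s, c) for s in sites for c in counties], lowBound=0)

prob = LpProblem("biofuel_facility_location", LpMinimize)
prob += lpSum(ship_cost[s, c] * flow[s, c] for s in sites for c in counties) \
        + lpSum(40.0 * open_site[s] for s in sites)      # fixed cost per opened facility
prob += lpSum(flow[s, c] for s in sites for c in counties) >= demand
for c in counties:
    prob += lpSum(flow[s, c] for s in sites) <= supply[c]
for s in sites:
    prob += lpSum(flow[s, c] for c in counties) <= 50 * open_site[s]   # 50 MGY upper bound
    prob += lpSum(flow[s, c] for c in counties) >= 30 * open_site[s]   # 30 MGY lower bound

prob.solve()
print({s: open_site[s].value() for s in sites})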

Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND The majority of radiological reports lack a standard structure. Even within a specialized area of radiology, each report has its own structure with regard to detail and order, often containing non-relevant information that the referring physician is not interested in. For gathering relevant clinical key parameters efficiently, or to support long-term therapy monitoring, structured reporting might be advantageous. OBJECTIVE Despite new technologies in medical information systems, medical reporting is still not dynamic. To improve the quality of communication in radiology reports, a new structured reporting system was developed for abdominal aortic aneurysms (AAA), intended to enhance professional communication by providing the pertinent clinical information in a predefined standard. METHODS An actual-state analysis was performed within the departments of radiology and vascular surgery by developing a Technology Acceptance Model. The SWOT (strengths, weaknesses, opportunities, and threats) analysis focused on optimization of the radiology reporting of patients with AAA. Definition of clinical parameters was achieved by interviewing experienced clinicians in radiology and vascular surgery. For evaluation, a focus group (4 radiologists) looked at the reports of 16 patients. The usability and reliability of the method was validated in a real-world test environment in the field of radiology. RESULTS A Web-based application for radiological "structured reporting" (SR) was successfully standardized for AAA. Its organization comprises three main categories: characteristics of the pathology and adjacent anatomy, measurements, and additional findings. Different graphical widgets (e.g., drop-down menus) in each category facilitate predefined data entries. Measurement parameters shown in a diagram can be defined for clinical monitoring and consulted for quick assessments. Figures that can optionally be used to guide and standardize the reporting are embedded. Analysis of variance shows a decreased average time required to produce a radiological report with SR compared to free-text reporting (P=.0001). Questionnaire responses confirm a high acceptance rate among users. CONCLUSIONS The new SR system may support efficient radiological reporting for initial diagnosis and follow-up of AAA. Perceived advantages of our SR platform include ease of use, which may lead to more accurate decision support. The new system is open to communicate not only with clinical partners but also with Radiology Information Systems and Hospital Information Systems.
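A structured-reporting template of this kind can be thought of as a predefined, ordered schema with constrained entries. The Python sketch below illustrates that idea for the three AAA categories named above; the field names and drop-down options are hypothetical examples, not the validated item set of the published system.

# Illustrative structured-reporting template with constrained (drop-down-style) entries.
AAA_REPORT_TEMPLATE = {
    "pathology_and_adjacent_anatomy": {
        "aneurysm_location": ["infrarenal", "juxtarenal", "suprarenal"],
        "thrombus_present": ["yes", "no"],
    },
    "measurements": {
        "max_outer_diameter_mm": None,   # numeric entry, plotted over time for monitoring
        "length_mm": None,
    },
    "additional_findings": {
        "free_text": "",
    },
}

def render_report(values, template=AAA_REPORT_TEMPLATE):
    """Validate drop-down selections and produce a report dict in the predefined order."""
    report = {}
    for category, fields in template.items():
        report[category] = {}
        for field, options in fields.items():
            value = values.get(field)
            if isinstance(options, list) and value is not None and value not in options:
                raise ValueError(f"{field}: '{value}' is not a predefined option")
            report[category][field] = value
    return report

print(render_report({"aneurysm_location": "infrarenal", "max_outer_diameter_mm": 58}))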

Relevance: 100.00%

Publisher:

Abstract:

The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, a short scan time is desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion of interest is contained within a volume-of-interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction: in a study designed to simulate target motion, the VOI-based reconstruction produced 43% lower least-squares error inside the VOI and 84% lower error throughout the image. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
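The error metrics quoted above can be illustrated with a short sketch: least-squares error evaluated inside a VOI mask versus over the whole volume. The arrays below are synthetic stand-ins; the actual 4D-CBCT reconstruction pipeline is not reproduced here.

# Illustrative VOI-masked vs whole-volume least-squares error (synthetic volumes).
import numpy as np

rng = np.random.default_rng(1)
ground_truth = rng.normal(size=(64, 64, 64))
reconstruction = ground_truth + 0.1 * rng.normal(size=ground_truth.shape)

voi = np.zeros(ground_truth.shape, dtype=bool)
voi[24:40, 24:40, 24:40] = True          # hypothetical VOI containing the moving target

def least_squares_error(a, b, mask=None):
    diff = (a - b) ** 2
    return diff[mask].sum() if mask is not None else diff.sum()

print("error inside VOI:", least_squares_error(reconstruction, ground_truth, voi))
print("error over whole volume:", least_squares_error(reconstruction, ground_truth))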

Relevance: 100.00%

Publisher:

Abstract:

This paper describes seagrass species and percentage cover point-based field data sets derived from georeferenced photo transects. Annually or biannually over a ten-year period (2004-2015), data sets were collected using 30-50 transects, 500-800 m in length, distributed across a 142 km² shallow, clear-water seagrass habitat, the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets includes seagrass property information derived from approximately 3000 georeferenced, downward-looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover (Coral Point Count with Excel extensions; CPCe). Understanding seagrass biology, ecology and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set, and the methods used to derive it, are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, for regional to global comparative studies, and for the design of similar monitoring programs elsewhere.
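The per-photo point interpretation reduces to simple counting. The Python sketch below shows how percentage cover and species composition could be derived from point annotations of a single photograph; the labels and records are toy examples, not data from the Eastern Banks set.

# Illustrative derivation of percent cover and species composition from point annotations.
from collections import Counter

# Each record: (photo_id, point_label); labels are hypothetical CPCe-style categories.
points = [
    ("p001", "Zostera muelleri"), ("p001", "Zostera muelleri"), ("p001", "sand"),
    ("p001", "Cymodocea serrulata"), ("p001", "sand"),
]

counts = Counter(label for _, label in points)
total = sum(counts.values())
seagrass = {k: v for k, v in counts.items() if k != "sand"}

percent_cover = 100.0 * sum(seagrass.values()) / total
composition = {k: 100.0 * v / sum(seagrass.values()) for k, v in seagrass.items()}
print(f"seagrass cover: {percent_cover:.1f}%")
print("species composition (% of seagrass points):", composition)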

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we describe a complete development platform that features different innovative acceleration strategies, not included in any other current platform, that simplify and speed up the definition of the different elements required to design a spoken dialog service. The proposed accelerations are mainly based on using the information from the backend database schema and contents, as well as cumulative information produced throughout the different steps in the design. Thanks to these accelerations, the interaction between the designer and the platform is improved, and in most cases the design is reduced to simple confirmations of the "proposals" that the platform dynamically provides at each step. In addition, the platform provides several other accelerations, such as configurable templates that can be used to define the different tasks in the service or the dialogs to obtain or show information to the user, automatic proposals for the best way to request slot contents from the user (i.e., using mixed-initiative or directed forms), an assistant that offers the set of most probable actions required to complete the definition of the different tasks in the application, and another assistant for solving specific modality details such as confirmations of user answers or how to present to the user the lists of results retrieved after querying the backend database. Additionally, the platform allows the creation of speech grammars, prompts, and database access functions, and supports mixed-initiative and over-answering dialogs. In the paper we also describe each assistant in the platform in detail, emphasizing the different kinds of methodologies followed to facilitate the design process in each one. Finally, we describe the results obtained in both a subjective and an objective evaluation with different designers, which confirm the viability, usefulness, and functionality of the proposed accelerations. Thanks to the accelerations, the design time is reduced by more than 56% and the number of keystrokes by 84%.
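One of the accelerations described, proposing dialog actions directly from the backend database schema, can be illustrated in a few lines of Python. The schema, table name and prompt wording below are hypothetical; the sketch only shows the idea of generating default slot-request prompts that the designer then confirms or edits.

# Illustrative generation of default slot-request prompts from a backend schema.
schema = {
    "flights": {
        "origin": "TEXT", "destination": "TEXT", "date": "DATE", "seat_class": "TEXT",
    }
}

def propose_slot_prompts(table, columns):
    """Return a default directed-form prompt per slot; a designer only confirms or edits."""
    return {col: f"Please tell me the {col.replace('_', ' ')} for your {table[:-1]}."
            for col in columns}

proposals = propose_slot_prompts("flights", schema["flights"])
for slot, prompt in proposals.items():
    print(f"{slot:12s} -> {prompt}")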

Relevance: 100.00%

Publisher:

Abstract:

Los cambios percibidos hacia finales del siglo XX y a principios del nuevo milenio, nos ha mostrado que la crisis cultural de la que somos participes refleja también una crisis de los modelos universales. Nuestra situación contemporánea, parece indicar que ya no es posible formular un sistema estético para atribuirle una vigencia universal e intemporal más allá de su estricta eficacia puntual. La referencia organizada, delimitada, invariable y específica que ofrecía cualquier emplazamiento, en tanto preexistencia, reflejaba una jerarquía del sistema formal basado en lo extensivo: la medida, las normas, el movimiento, el tiempo, la modulación, los códigos y las reglas. Sin embargo, actualmente, algunos aspectos que permanecían latentes sobre lo construido, emergen bajo connotaciones intensivas, transgrediendo la simple manifestación visual y expresiva, para centrase en las propiedades del comportamiento de la materia y la energía como determinantes de un proceso de adaptación en el entorno. A lo largo del todo el siglo XX, el desarrollo de la relación del proyecto sobre lo construido ha sido abordado, casi en exclusiva, entre acciones de preservación o intervención. Ambas perspectivas, manifestaban esfuerzos por articular un pensamiento que diera una consistencia teórica, como soporte para la producción de la acción aditiva. No obstante, en las últimas décadas de finales de siglo, la teoría arquitectónica terminó por incluir pensamientos de otros campos que parecen contaminar la visión sesgada que nos refería lo construido. Todo este entramado conceptual previo, aglomeraba valiosos intentos por dar contenido a una teoría que pudiese ser entendida desde una sola posición argumental. Es así, que en 1979 Ignasi Solá-Morales integró todas las imprecisiones que referían una actuación sobre una arquitectura existente, bajo el termino de “intervención”, el cual fue argumentado en dos sentidos: El primero referido a cualquier tipo de actuación que se puede hacer en un edificio, desde la defensa, preservación, conservación, reutilización, y demás acciones. Se trata de un ámbito donde permanece latente el sentido de intensidad, como factor común de entendimiento de una misma acción. En segundo lugar, más restringido, la idea de intervención se erige como el acto crítico a las ideas anteriores. Ambos representan en definitiva, formas de interpretación de un nuevo discurso. “Una intervención, es tanto como intentar que el edificio vuelva a decir algo o lo diga en una determinada dirección”. A mediados de 1985, motivado por la corriente de revisión historiográfica y la preocupación del deterioro de los centros históricos que recorría toda Europa, Solá-Morales se propone reflexionar sobre “la relación” entre una intervención de nueva arquitectura y la arquitectura previamente existente. Relación condicionada estrictamente bajo consideraciones lingüísticas, a su entender, en sintonía con toda la producción arquitectónica de todo el siglo XX. Del Contraste a la Analogía, resumirá las transformaciones en la concepción discursiva de la intervención arquitectónica, como un fenómeno cambiante en función de los valores culturales, pero a su vez, mostrando una clara tendencia dialógica entres dos categorías formales: El Contraste, enfatizando las posibilidades de la novedad y la diferencia; y por otro lado la emergente Analogía, como una nueva sensibilidad de interpretación del edificio antiguo, donde la semejanza y la diversidad se manifiestan simultáneamente. 
El aporte reflexivo de los escritos de Solá-Morales podría ser definitivo, si en las últimas décadas antes del fin de siglo, no se hubiesen percibido ciertos cambios sobre la continuidad de la expresión lingüística que fomentaba la arquitectura, hacia una especie de hipertrofia figurativa. Entre muchos argumentos: La disolución de la consistencia compositiva y el estilo unitario, la incorporación volumétrica del proyecto como dispositivo reactivo, y el cambio de visión desde lo retrospectivo hacia lo prospectivo que sugiere la nueva conservación. En este contexto de desintegración, el proyecto, en tanto incorporación o añadido sobre un edificio construido, deja de ser considerado como un apéndice volumétrico subordinado por la reglas compositivas y formales de lo antiguo, para ser considerado como un organismo de orden reactivo, que produce en el soporte existente una alteración en su conformación estructural y sistémica. La extensión, antes espacial, se considera ahora una extensión sensorial y morfológica con la implementación de la tecnología y la hiper-información, pero a su vez, marcados por una fuerte tendencia de optimización energética en su rol operativo, ante el surgimiento del factor ecológico en la producción contemporánea. En una sociedad, como la nuestra, que se está modernizando intensamente, es difícil compartir una adecuada sintonía con las formas del pasado. Desde 1790, fecha de la primera convención francesa para la conservación de monumentos, la escala de lo que se pretende preservar es cada vez más ambiciosa, tanto es así, que al día de hoy el repertorio de lo que se conserva incluye prácticamente todas las tipologías del entorno construido. Para Koolhaas, el intervalo entre el objeto y el momento en el cual se decide su conservación se ha reducido, desde dos milenios en 1882 a unas décadas hoy en día. En breve este lapso desaparecerá, demostrando un cambio radical desde lo retrospectivo hacia lo prospectivo, es decir, que dentro de poco habrá que decidir que es lo que se conserva antes de construir. Solá-Morales, en su momento, distinguió la relación entre lo nuevo y lo antiguo, entre el contraste y la analogía. Hoy casi tres décadas después, el objetivo consiste en evaluar si el modelo de intervención arquitectónica sobre lo construido se ha mantenido desde entonces o si han aparecido nuevas formas de posicionamiento del proyecto sobre lo construido. Nuestro trabajo pretende demostrar el cambio de enfoque proyectual con la preexistencia y que éste tiene estrecha relación con la incorporación de nuevos conceptos, técnicas, herramientas y necesidades que imprimen el contexto cultural, producido por el cambio de siglo. Esta suposición nos orienta a establecer un paralelismo arquitectónico entre los modos de relación en que se manifiesta lo nuevo, entre una posición comúnmente asumida (Tópica), genérica y ortodoxa, fundamentada en lo visual y expresivo de las últimas décadas del siglo XX, y una realidad emergente (Heterotópica), extraordinaria y heterodoxa que estimula lo inmaterial y que parece emerger con creciente intensidad en el siglo XXI. 
Si a lo largo de todo el siglo XX, el proyecto de intervención arquitectónico, se debatía entre la continuidad y discontinuidad de las categorías formales marcadas por la expresión del edificio preexistente, la nueva intervención contemporánea, como dispositivo reactivo en el paisaje y en el territorio, demanda una absoluta continuidad, ya no visual, expresiva, ni funcional, sino una continuidad fisiológica de adaptación y cambio con la propia dinámica del territorio, bajo nuevas reglas de juego y desplegando planes y estrategias operativas (proyectivas) desde su propia lógica y contingencia. El objeto de esta investigación es determinar los nuevos modos de continuidad y las posibles lógicas de producción que se manifiestan dentro de la Intervención Arquitectónica, intentando superar lo aparente de su relación física y visual, como resultado de la incorporación del factor operativo desplegado por el nuevo dispositivo contemporáneo. Creemos que es acertado mantener la senda connotativa que marca la denominación intervención arquitectónica, por aglutinar conceptos y acercamientos teóricos previos que han ido evolucionando en el tiempo. Si bien el término adolece de mayor alcance operativo desde su formulación, una cualidad que infieren nuestras lógicas contemporáneas, podría ser la reformulación y consolidación de un concepto de intervención más idóneo con nuestros tiempos, anteponiendo un procedimiento lógico desde su propia necesidad y contingencia. Finalmente, nuestro planteamiento inicial aspira a constituir un nueva forma de reflexión que nos permita comprender las complejas implicaciones que infiere la nueva arquitectura sobre la preexistencia, motivada por las incorporación de factores externos al simple juicio formal y expresivo preponderante a finales del siglo XX. Del mismo modo, nuestro camino propuesto, como alternativa, permite proyectar posibles sendas de prospección, al considerar lo preexistente como un ámbito que abarca la totalidad del territorio con dinámicas emergentes de cambio, y con ellas, sus lógicas de intervención.Abstract The perceived changes towards the end of the XXth century and at the beginning of the new milennium have shown us that the cultural crisis in which we participate also reflects a crisis of the universal models. The difference between our contemporary situation and the typical situations of modern orthodoxy and post-modernistic fragmentation, seems to indicate that it is no longer possible to formulate a valid esthetic system, to assign a universal and eternal validity to it beyond its strictly punctual effectiveness; which is even subject to questioning because of the continuous transformations that take place in time and in the sensibility of the subject itself every time it takes over the place. The organised reference that any location offered, limited, invariable and specific, while pre-existing, reflected a hierarchy of the formal system based on the applicable: measure, standards, movement, time, modulation, codes and rules. Authors like Marshall Mc Luhan, Paul Virilio, or Marc Augé anticipated a reality where the conventional system already did not seem to respond to the new architectural requests in which information, speed, disappearance and the virtual had blurred the traditional limits of place; pre-existence did no longer possess a specific delimitation and, on the contrary, they expect to reach a global scale. 
Currently, some aspects that stayed latent relating to the constructed surface under intensive connotations, transgressing the simple visual and expressive manifestation in order to focus on the traits of the behaviour of material and energy as determinants of a process of adaptation to the surroundings. Throughout the entire century, the development of the relation of the project relating to the constructed has been addressed, almost exclusively, through preservation or intervention actions. Both perspectives showed efforts in order to express a thought that would give a theoretical consistency as a base for the production of the additive action. Nevertheless, in the last decades of the century, architectural theory ended up including thoughts from other fields that seem to contaminate the biased vision through which the constructed was presented to us. Ecology, planning, philosophy, global economy, etc., suggest new approaches to the construction of the contemporary city; but this time with a determined idea of change and continuous transformation, that enriches the panorama of thought and architectural practice, at the same time, according to some, it puts disciplinary specification at risk, given that there is no architecture without destruction, the constructed organism requires mutation in order to adjust to the change of shape. All of this previous conceptual framework gathered valuable intents to give importance to a theory that could be understood solely from an argumental position. Thus, in 1979 Ignasi Solá-Morales integrated all of the imprecisions that referred to an action in existing architecture under the term of “Intervention”, which was explained in two ways: The first referring to any type of intervention that can be carried out in a building, regarding protection, conservation, reuse, etc. It is about a scope where the meaning of intensity stays latent as a common factor of the understanding of a single action. Secondly, more limitedly, the idea of intervention is established as the critical act to the other previous ideas such as restoration, conservation, reuse, etc. Both ultimately represent ways of interpretation of a new speech. “An intervention is as much as trying to make the building say something again or that it be said in a certain direction”. In mid-1985, motivated by the current of historiographical revision and the concerns regarding the deterioration of historical centres that traversed Europe, Solá-Morales decides to reflect on “the relationship” between an intervention of the new architecture and the previously existing architecture. A relationship determined strictly by linguistic considerations, to his understanding, in harmony with all of the architectural production of the XXth century. From Contrast to Analogy would summarise transformations in the discursive perception of architectural intervention, as a changing phenomenon depending on cultural values, but at the same time, showing a clear dialogical tendency between two formal categories: Contrast, emphasising the possibilities of novelty and difference; and on the other hand the emerging Analogy, as a new awareness of interpretation of the ancient building, where the similarity and diversity are manifested simultaneously. For Solá-Morales the analogical procedure is not based on the visible simultaneity of formal orders, but on associations that the subject establishes throughout time. 
Through analogy an attempt is made to overcome the simple visual relationship with the antique, to focus on its spatial, physical and geographical nature. If the analogical attempt guides an opening towards a new continuity, it still persists in the connection of dimensional, typological and figurative factors, subordinate to the formal hierarchy of the preexisting subjects. The reflexive contribution of Solá-Morales’ works could be final, if in the last decades before the end of the century there had not been certain changes regarding linguistic expression, encouraged by architecture, towards a kind of figurative hypertrophy; amongst many arguments, we are in this case interested in three moments: the dissolution of the compositional consistency and the united style, the volumetric incorporation of the project as a reactive mechanism, and the change of the vision from retrospective towards prospective that the new conservation suggests. The recurrence to the history of architecture and its recognisable forms, as a way of perpetuating memory and establishing a reference, dissolved any instinct of compositive unity and style, provoking permanent relationships to tend to disappear. The composition and coherence lead to suppose a type of discontinuity of isolated objects in which only possible relationships could appear; no longer as an order of certain formal and compositive rules, but as a special way of setting elements in a specific work. The new globalised field required new forms of consistency between the project and the pre-existent subject, motivated amongst others by the higher pace of market evolution, the increase of consumer tax and the level of information and competence between different locations; aspects which finally made stylistic consistency inefficient. In this context of disintegration, the project, in incorporation as well as added to a constructed building, stops being considered as a volumetric appendix subordinate to compositive and formal rules of old, to be considered as an organism of reactive order, that causes a change in the structural and systematic configuration of the existing foundation. The extension, previously spatial, is now considered a sensorial and morphological extension, with the implementation of technology and hyper-information, but at the same time, marked by a strong tendency of energetic optimization in its operational role, facing the emergence of the ecological factor in contemporary production. The technological world turns into a new nature, a nature that should be analysed in ecological terms; in other words, as an event of transition in the continuous redistribution of energy. In this area, effectiveness is not only determined by the capacity of adaptation to changing conditions, but also by its transforming capacity “expressly” in order to change an environment. In a society, like ours, that is modernising intensively, it is difficult to share an adequate agreement with the forms of the past. From 1790, the date of the first French convention for the conservation of monuments, the scale of what is expected to be preserved is more and more ambitious, so much so that nowadays the repertoire of what is conserved includes practically all typologies of the constructed surroundings. For Koolhaas, the interval between the object and the moment when its conservation is decided has been reduced, from two millennia in 1882 to a few decades nowadays. 
Shortly this lapse will disappear, showing a radical change from retrospective towards prospective, that is to say, that soon it will be necessary to decide what to conserve before constructing. The shapes of cities are the result of the continuous incorporation of architecture, and perhaps only through architecture can the response to the universe be understood, the continuity of what has already been constructed. Our work is understood also within that system, modifying the field of action and leaving the road ready for the next movement of those that will follow after us. Continuity does not mean conservatism; continuity means being conscious of the transitory value of our answers to specific needs, accepting the change that we have received. What has been constructed to remain and last should cause future interventions to be integrated in it. It is necessary to accept continuity as a rule. Solá-Morales, in his time, distinguished between the relationship with new and old, between contrast and analogy. Today, almost three decades later, the objective consists of evaluating whether the model of architectural intervention in the constructed has been maintained since then or if new ways of positioning the project regarding the constructed have appeared. Our work claims to show the change of the approach of projects with pre-existing subjects and that this has a close relation to the incorporation of new concepts, techniques, tools and necessities that impress the cultural context, caused by the change of centuries. This assumption guides us to establish a parallelism between the forms of connection where what is new is manifested, between a commonly assumed (topical), generic and orthodox position, based on what is visual and expressive in the last decades of the XXth century, and an emerging (heterotopical), extraordinary and heterodox reality that stimulates the immaterial and that seems to emerge with growing intensity in the XXIst century. If throughout the XXth century the project of architectural intervention was considered from the continuity and discontinuity of formal categories, marked by the expression of the pre-existing building, the new contemporary intervention, as a reactive device in the landscape and territory, demands an absolute continuity. No longer a visual, expressive or functional one but a morphological continuity of adaptation and change with its own territorial dynamics, under new game rules and unfolding new operative (projective) strategies from its own logic and contingency. The aim of this research is to determine new forms of continuity and the possible logic of production that are expressed in the Architectural Intervention, trying to overcome the obviousness of its physical and visual relationship, at the beginning of this new century, as a result of the incorporation of the operative factor that the new architectural device unfolds. We think it is correct to maintain the connotative path that marks the name architectural intervention by bringing together previous concepts and theoretical approaches that have been evolving through time. If the name suffers from a wider operational range because of its formulation, a quality that our contemporary logic provokes, the reformulation and consolidation of an interventional concept could be more suitable for our times, giving preference to a logical method from its own necessity and contingency. 
It seems that now time shapes the topics, it is no longer about materialising a certain time but about expressing the changes that its new temporality generates. Finally, our initial approach aspires to form a new way of reflection that permits us to understand the complex implications that the new architecture submits the pre-existing subject to, motivated by the incorporation of factors external to simple formal and expressive judgement, prevailing at the end of the XXth century. In the same way, our set road, as an alternative, permits the contemplation of possible research paths, considering that what is pre-existing as an area that spans the whole territory with emerging changing dynamics and, with them, their interventional logics.

Relevance: 100.00%

Publisher:

Abstract:

Los Centros de Datos se encuentran actualmente en cualquier sector de la economía mundial. Están compuestos por miles de servidores, dando servicio a los usuarios de forma global, las 24 horas del día y los 365 días del año. Durante los últimos años, las aplicaciones del ámbito de la e-Ciencia, como la e-Salud o las Ciudades Inteligentes han experimentado un desarrollo muy significativo. La necesidad de manejar de forma eficiente las necesidades de cómputo de aplicaciones de nueva generación, junto con la creciente demanda de recursos en aplicaciones tradicionales, han facilitado el rápido crecimiento y la proliferación de los Centros de Datos. El principal inconveniente de este aumento de capacidad ha sido el rápido y dramático incremento del consumo energético de estas infraestructuras. En 2010, la factura eléctrica de los Centros de Datos representaba el 1.3% del consumo eléctrico mundial. Sólo en el año 2012, el consumo de potencia de los Centros de Datos creció un 63%, alcanzando los 38GW. En 2013 se estimó un crecimiento de otro 17%, hasta llegar a los 43GW. Además, los Centros de Datos son responsables de más del 2% del total de emisiones de dióxido de carbono a la atmósfera. Esta tesis doctoral se enfrenta al problema energético proponiendo técnicas proactivas y reactivas conscientes de la temperatura y de la energía, que contribuyen a tener Centros de Datos más eficientes. Este trabajo desarrolla modelos de energía y utiliza el conocimiento sobre la demanda energética de la carga de trabajo a ejecutar y de los recursos de computación y refrigeración del Centro de Datos para optimizar el consumo. Además, los Centros de Datos son considerados como un elemento crucial dentro del marco de la aplicación ejecutada, optimizando no sólo el consumo del Centro de Datos sino el consumo energético global de la aplicación. Los principales componentes del consumo en los Centros de Datos son la potencia de computación utilizada por los equipos de IT, y la refrigeración necesaria para mantener los servidores dentro de un rango de temperatura de trabajo que asegure su correcto funcionamiento. Debido a la relación cúbica entre la velocidad de los ventiladores y el consumo de los mismos, las soluciones basadas en el sobre-aprovisionamiento de aire frío al servidor generalmente tienen como resultado ineficiencias energéticas. Por otro lado, temperaturas más elevadas en el procesador llevan a un consumo de fugas mayor, debido a la relación exponencial del consumo de fugas con la temperatura. Además, las características de la carga de trabajo y las políticas de asignación de recursos tienen un impacto importante en los balances entre corriente de fugas y consumo de refrigeración. La primera gran contribución de este trabajo es el desarrollo de modelos de potencia y temperatura que permiten describes estos balances entre corriente de fugas y refrigeración; así como la propuesta de estrategias para minimizar el consumo del servidor por medio de la asignación conjunta de refrigeración y carga desde una perspectiva multivariable. Cuando escalamos a nivel del Centro de Datos, observamos un comportamiento similar en términos del balance entre corrientes de fugas y refrigeración. Conforme aumenta la temperatura de la sala, mejora la eficiencia de la refrigeración. Sin embargo, este incremente de la temperatura de sala provoca un aumento en la temperatura de la CPU y, por tanto, también del consumo de fugas. 
Además, la dinámica de la sala tiene un comportamiento muy desigual, no equilibrado, debido a la asignación de carga y a la heterogeneidad en el equipamiento de IT. La segunda contribución de esta tesis es la propuesta de técnicas de asigación conscientes de la temperatura y heterogeneidad que permiten optimizar conjuntamente la asignación de tareas y refrigeración a los servidores. Estas estrategias necesitan estar respaldadas por modelos flexibles, que puedan trabajar en tiempo real, para describir el sistema desde un nivel de abstracción alto. Dentro del ámbito de las aplicaciones de nueva generación, las decisiones tomadas en el nivel de aplicación pueden tener un impacto dramático en el consumo energético de niveles de abstracción menores, como por ejemplo, en el Centro de Datos. Es importante considerar las relaciones entre todos los agentes computacionales implicados en el problema, de forma que puedan cooperar para conseguir el objetivo común de reducir el coste energético global del sistema. La tercera contribución de esta tesis es el desarrollo de optimizaciones energéticas para la aplicación global por medio de la evaluación de los costes de ejecutar parte del procesado necesario en otros niveles de abstracción, que van desde los nodos hasta el Centro de Datos, por medio de técnicas de balanceo de carga. Como resumen, el trabajo presentado en esta tesis lleva a cabo contribuciones en el modelado y optimización consciente del consumo por fugas y la refrigeración de servidores; el modelado de los Centros de Datos y el desarrollo de políticas de asignación conscientes de la heterogeneidad; y desarrolla mecanismos para la optimización energética de aplicaciones de nueva generación desde varios niveles de abstracción. ABSTRACT Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally and 24-7. In the last years, e-Science applications such e-Health or Smart Cities have experienced a significant development. The need to deal efficiently with the computational needs of next-generation applications together with the increasing demand for higher resources in traditional applications has facilitated the rapid proliferation and growing of data centers. A drawback to this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In year 2012 alone, global data center power demand grew 63% to 38GW. A further rise of 17% to 43GW was estimated in 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal and energy-aware optimization techniques that contribute to place data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility, but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a certain temperature range that ensures safe operation. 
Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves. However, as we increase room temperature, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
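The leakage-cooling tradeoff at the heart of the first contribution can be illustrated numerically. The Python sketch below combines a cubic fan-power law with an exponential leakage term and a fixed dynamic power, then locates the cooling effort that minimizes total server power; all coefficients are invented for illustration and do not come from the thesis models.

# Illustrative leakage-cooling tradeoff: cubic fan power vs exponential leakage.
import numpy as np

fan_speed = np.linspace(0.2, 1.0, 81)          # normalized fan speed
fan_power = 15.0 * fan_speed ** 3              # W, cubic fan law

cpu_temp = 90.0 - 40.0 * fan_speed             # C, more airflow -> cooler chip (toy model)
leakage_power = 5.0 * np.exp(0.03 * cpu_temp)  # W, exponential dependence on temperature
dynamic_power = 60.0                           # W, workload-dependent, held fixed here

total = dynamic_power + leakage_power + fan_power
best = np.argmin(total)
print(f"minimum total power {total[best]:.1f} W at fan speed {fan_speed[best]:.2f} "
      f"(CPU {cpu_temp[best]:.1f} C)")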

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Array measurements have become a valuable tool for non-invasive site response characterization. The array design, i.e. size, geometry and number of stations, has a great influence on the quality of the results obtained. Of these parameters, the number of available stations is usually the main limitation in field experiments, because of the economic and logistical constraints involved. Sometimes, in an array layout carefully designed before the fieldwork campaign, one or more stations do not work properly, modifying the prearranged geometry; at other times, it is not possible to set up the desired array layout because of a lack of stations. Therefore, for a planned array layout, the number of operative stations and their arrangement in the array become a crucial point in the acquisition stage and subsequently in the dispersion curve estimation. In this paper we carry out an experimental study to analyze the minimum number of stations that would provide reliable dispersion curves for three prearranged array configurations (triangular, circular with central station and polygonal geometries). For the optimization study, we jointly analyze the theoretical array responses and the experimental dispersion curves obtained through the f-k method. In the case of the f-k method, we compare the dispersion curves obtained for the original or prearranged arrays with those obtained for the modified arrays, i.e. the dispersion curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us to determine how strongly the studied geometries are constrained by station removal and which station or combination of stations degrades the array capability most when it is not available. All this information might be crucial to improve future array designs, determining when it is possible to optimize the number of deployed stations without losing the reliability of the obtained results.
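The comparison strategy outlined above, removing subsets of stations from a prearranged layout and scoring the resulting dispersion curves against the full-array estimate with a misfit function, can be sketched as follows. This is only a schematic stand-in: the synthetic "dispersion estimate", the circular-with-central-station coordinates and the RMS-of-relative-differences misfit are assumptions for illustration, not the paper's actual f-k processing chain.

```python
import itertools
import math
import random

# Hypothetical circular array with a central station (coordinates in metres).
STATIONS = [(0.0, 0.0)] + [(50 * math.cos(2 * math.pi * k / 9),
                            50 * math.sin(2 * math.pi * k / 9)) for k in range(9)]
FREQS = [f * 0.5 for f in range(2, 41)]           # 1-20 Hz

def reference_curve(freqs):
    """Assumed 'true' dispersion curve (phase velocity in m/s)."""
    return [200.0 + 400.0 / f for f in freqs]

def estimated_curve(stations, freqs, seed=0):
    """Stand-in for an f-k dispersion estimate: fewer stations and a smaller
    effective aperture give a noisier recovered curve (purely synthetic)."""
    pairs = list(itertools.combinations(stations, 2))
    aperture = sum(math.dist(a, b) for a, b in pairs) / len(pairs)
    rng = random.Random(seed)
    noise = 4000.0 / (len(stations) * aperture)   # arbitrary degradation model
    return [v + rng.gauss(0.0, noise) for v in reference_curve(freqs)]

def misfit(curve, ref):
    """RMS of relative differences between a test curve and the reference one."""
    return math.sqrt(sum(((c - r) / r) ** 2 for c, r in zip(curve, ref)) / len(ref))

ref = reference_curve(FREQS)
for n_removed in (1, 2, 3):
    worst = max(
        (misfit(estimated_curve([s for i, s in enumerate(STATIONS) if i not in drop],
                                FREQS), ref), drop)
        for drop in itertools.combinations(range(len(STATIONS)), n_removed))
    print(f"removing {n_removed} station(s): worst misfit {worst[0]:.4f} "
          f"for station indices {worst[1]}")
```

Ranking the station combinations by misfit is what identifies which stations a given geometry can least afford to lose.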

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Background: Studies suggest that expert performance in sport is the result of long-term engagement in a highly specialized form of training termed deliberate practice. The relationship between accumulated deliberate practice and performance predicts that those who begin deliberate practice at a young age accumulate more practice hours over time and would, therefore, have a significant performance advantage. However, qualitative studies have shown that a large amount of sport-specific practice at a young age may lead to negative consequences, such as dropout, and is not necessarily the only path to expert performance in sport. Studies have yet to investigate the activity context, such as the amount of early sport participation, deliberate play and deliberate practice within which dropout occurs. Purpose: To determine whether the nature and amount of childhood-organized sport, deliberate play and deliberate practice participation influence athletes' subsequent decisions to drop out or invest in organized sport. It was hypothesized that young athletes who drop out will have sampled fewer sports, spent less time in deliberate play activities and spent more time in deliberate practice activities during childhood sport involvement. Participants: The parents of eight current, high-level, male, minor ice hockey players formed an active group. The parents of four high-level, male, minor ice hockey players who had recently withdrawn from competitive hockey formed a dropout group. Data collection: Parents completed a structured retrospective survey designed to assess their sons' involvement in organized sport, deliberate play and deliberate practice activities from ages 6 to 13. Data analysis: A complete data-set was available for ages 6 through 13, resulting in a longitudinal data-set spanning eight years. This eight-year range was divided into three levels of development corresponding to the players' progress through the youth ice hockey system. Level one encompassed ages 6–9, level two included ages 10–11 and level three covered ages 12–13. Descriptive statistics were used to report the ages at which the active and dropout players first engaged in select hockey activities. ANOVA with repeated measures across the three levels of development was used to compare the number of sports the active and dropout players were involved in outside of hockey, the number of hours spent in these sports, and involvement in various hockey-related activities. Findings: Results indicated that both the active and dropout players enjoyed a diverse and playful introduction to sport. Furthermore, the active and dropout players invested similar amounts of time in organized hockey games, organized hockey practices, specialized hockey training activities (e.g. hockey camps) and hockey play. However, analysis revealed that the dropout players began off-ice training at a younger age and invested significantly more hours/year in off-ice training at ages 12–13, indicating that engaging in off-ice training activities at a younger age may have negative implications for long-term ice hockey participation. Conclusion: These results are consistent with previous research that has found that early diversification does not hinder sport-specific skill development and it may, in fact, be preferable to early specialization. The active and dropout players differed in one important aspect of deliberate practice: off-ice training activities. 
The dropout players began off-ice training at a younger age, and participated in more off-ice training at ages 12 and 13 than their active counterparts. This indicates a form of early specialization and supports the postulate that early involvement in practice activities that are not enjoyable may ultimately undermine the intrinsic motivation to continue in sport. Youth sport programs should not focus on developing athletic fitness through intense and routine training, but rather on sport-specific practice, games and play activities that foster fun and enjoyment.
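For readers unfamiliar with the data-analysis step described above, the sketch below shows how a repeated-measures ANOVA across three development levels can be laid out in code. The hours-per-year figures are invented and the statsmodels AnovaRM call only illustrates the general technique; it tests the within-subject effect of development level separately for each group, whereas the study's active-versus-dropout comparison would additionally require a mixed (between- and within-subject) design.

```python
import random

import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = random.Random(1)
records = []
# Invented hours/year of off-ice training -- layout only, NOT the study's data.
base_hours = {"active": {1: 5, 2: 15, 3: 30}, "dropout": {1: 8, 2: 25, 3: 60}}
for group, per_level in base_hours.items():
    n_players = 8 if group == "active" else 4         # group sizes from the study
    for p in range(n_players):
        for level, hrs in per_level.items():
            records.append({"player": f"{group}_{p}", "group": group,
                            "level": level,
                            "hours": hrs + rng.gauss(0.0, 4.0)})

df = pd.DataFrame(records)

# Within each group, a repeated-measures ANOVA asks whether off-ice training time
# changes across the three development levels (players are the repeated subjects).
for group, sub in df.groupby("group"):
    print(f"--- {group} ---")
    print(AnovaRM(data=sub, depvar="hours", subject="player",
                  within=["level"]).fit())
```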

Relevância:

100.00% 100.00%

Publicador:

Resumo:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. These trends indicate that familiarity with information technologies is increasing, and suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, thereby reinforcing the advantages of online questionnaire delivery. The second error type, the non-response error, occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and the time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation (and indeed to provide respondents with context-sensitive assistance during the response process) and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated. 
Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need to either adopt and acquire training in questionnaire development software or secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
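Two of the capabilities highlighted in this abstract, skip patterns that stay invisible to the respondent and an estimate of completion time that discourages abandonment, are easy to picture with a small sketch. The mini-questionnaire below (question ids, wording, per-question timings and branching rules) is entirely invented for illustration and is not taken from the guidelines the paper derives.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Hypothetical mini-questionnaire -- ids, wording, timings and branching rules
# are invented purely to illustrate skip logic and completion-time estimates.
@dataclass
class Question:
    qid: str
    text: str
    est_seconds: int = 20                                  # assumed answering time
    # Skip logic: return the id of the next question given the answers so far.
    next_q: Callable[[Dict[str, str]], Optional[str]] = lambda answers: None

QUESTIONS: Dict[str, Question] = {
    "q1": Question("q1", "Do you own a car? (yes/no)",
                   next_q=lambda a: "q2" if a["q1"] == "yes" else "q3"),
    "q2": Question("q2", "How many km do you drive per week?",
                   next_q=lambda a: "q3"),
    "q3": Question("q3", "How do you usually commute?"),
}

def estimated_time(start: str, answers: Dict[str, str]) -> int:
    """Estimate the seconds left from a given question, following the skip logic.
    Unanswered branching questions are assumed 'yes' (the longer path here)."""
    seconds, qid = 0, start
    while qid is not None:
        q = QUESTIONS[qid]
        seconds += q.est_seconds
        qid = q.next_q({**answers, q.qid: answers.get(q.qid, "yes")})
    return seconds

print("estimated completion time:", estimated_time("q1", {}), "s")
# A respondent answering "no" to q1 is routed straight to q3; q2 never appears.
print("next question after q1='no':", QUESTIONS["q1"].next_q({"q1": "no"}))
```

A real questionnaire would surface the running estimate as a progress indicator and re-evaluate it as answers arrive, which is one way to realise the abandonment-reducing features the abstract describes.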