907 results for Sub-tropical Design


Relevance:

30.00%

Publisher:

Abstract:

The drying and nebulizing gas flow rates, heat block and desolvation line temperatures, and interface voltage are electrospray ionization parameters that can enhance the sensitivity of a mass spectrometer. The conditions giving the highest sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen for significant factors; interface voltage and nebulizing gas flow were the only factors that influenced the signal intensity for all pharmaceuticals. This fractional factorial design was projected onto a full 2² factorial design with center points. The lack-of-fit test proved significant, so a central composite face-centered design was then conducted. Finally, stepwise multiple linear regression and a subsequent optimization were carried out. Two main drug clusters emerged from the signal intensities across all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide form one cluster, showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. The instrumental signal increased when both significant factors increased, with the maximum signal occurring when both coded factors were set at level +1. For most of the pharmaceuticals, interface voltage influences the signal intensity more than the nebulizing gas flow rate; the only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors influence the instrumental signal equally.
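
A minimal sketch of the final response-surface step, assuming two coded factors and a full quadratic model; the design matrix is the standard face-centered central composite construction, and the run intensities are hypothetical placeholders rather than the paper's data:

```python
# Face-centered central composite design (CCF) for the two significant
# factors, interface voltage (x1) and nebulizing gas flow (x2),
# fitted with a quadratic model by ordinary least squares.
import itertools
import numpy as np

# CCF in coded units: 4 factorial, 4 axial (alpha = 1), 3 center runs
factorial = list(itertools.product([-1, 1], repeat=2))
axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
center = [(0, 0)] * 3
X = np.array(factorial + axial + center, dtype=float)

# Hypothetical signal intensities, one per run (replace with measured data)
y = np.array([5.0, 9.0, 8.0, 15.0, 6.0, 12.0, 7.0, 11.0, 9.5, 9.7, 9.4])

# Full quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0]**2, X[:, 1]**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["b0", "x1", "x2", "x1:x2", "x1^2", "x2^2"], coef.round(3))))
```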

Relevance:

30.00%

Publisher:

Abstract:

The performance, energy-efficiency, and cost improvements from traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects in size, speed, and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by extending the routing problem into a third dimension, and it enables transistor density scaling independent of the technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated, high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory wall and the communication wall. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment needed to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and through-silicon via (TSV) reliability. Transistor stacking increases power density, current density, and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical TSVs that create new points of failure in the chip and require the development of new back-end-of-line (BEOL) technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth that integrates the computational, electrical, physical, thermal, and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs, owing to a lack of understanding of cross-domain interactions and their impact on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance, energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, inter-layer coupling and the higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters, and the multitude of metrics of interest to the designer (i.e., power, performance, temperature, and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm and discuss possible avenues for future improvement of this work.
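
The heat-removal argument can be illustrated with a toy one-dimensional thermal model; all powers and resistances below are assumed values for illustration only, not the dissertation's simulator:

```python
# Toy 1-D series thermal-resistance ladder for a two-tier 3D stack
# with air cooling: the buried tier's heat must also cross the
# inter-tier layer, so it runs hotter than the tier next to the sink.
P1, P2 = 40.0, 25.0        # tier power [W] (hypothetical)
R_tim, R_hs = 0.10, 0.25   # TIM and heatsink-to-air resistance [K/W]
R_tier = 0.05              # inter-tier (bond + TSV layer) resistance [K/W]
T_amb = 45.0               # ambient temperature [deg C]

T_top = T_amb + (P1 + P2) * (R_hs + R_tim)   # tier adjacent to heatsink
T_bot = T_top + P2 * R_tier                  # buried tier is hotter
print(f"near-sink tier: {T_top:.1f} C, buried tier: {T_bot:.1f} C")
```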

Relevance:

30.00%

Publisher:

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution for higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC technology presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow that optimizes directly for clock power. We also investigate a design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. Unlike the common assumption in 2D ICs that shutdown gates are cheap and can therefore be applied at every clock node, shutdown gates in 3D ICs introduce additional control through-silicon vias (TSVs), which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that clock power is minimized, and we show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex interdependencies between electrical and thermal conditions that have not been investigated in the past. In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of the electrical and reliability properties, improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles and ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers couple into the DRAM layers, causing a non-uniform distribution of bit-cell leakage and, thereby, of bit flips. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, we investigate a dynamic resilience management (DRM) scheme that adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition at runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances the DRAM's resilience without sacrificing performance. The proposed physical design methodologies should serve as important building blocks for 3D ICs and push them toward mainstream acceptance in the near future.
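
The electrical/thermal coupling behind TSV electromigration is often captured, to first order, by the textbook Black's equation; the sketch below uses assumed constants for illustration, not the dissertation's calibrated co-simulation model:

```python
# Black's-equation sketch for electromigration lifetime:
# MTTF ~ A * j^-n * exp(Ea / kT).
import math

def em_mttf(j, T, A=1e3, n=2.0, Ea=0.8):
    """j: current density [MA/cm^2], T: temperature [K], Ea in eV."""
    k = 8.617e-5  # Boltzmann constant [eV/K]
    return A * j**(-n) * math.exp(Ea / (k * T))

# The same current density costs far more lifetime at the elevated
# temperatures typical of stacked tiers.
for T in (330.0, 360.0, 390.0):
    print(f"T = {T:.0f} K -> relative MTTF = {em_mttf(1.0, T):.3g}")
```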

Relevance:

30.00%

Publisher:

Abstract:

The increased prevalence of iron deficiency among infants can be attributed, among other factors, to the consumption of an iron-deficient diet, or of a diet that interferes with iron absorption, at the critical time of infancy. The gradual shift from breast milk to other foods and liquids is a transition period that contributes greatly to iron deficiency anaemia (IDA). The purpose of this research was to assess iron deficiency anaemia among infants aged six to nine months in Keiyo South Sub-County. The specific objectives were to establish the prevalence of iron deficiency anaemia and to assess dietary iron intake among infants aged 6 to 9 months. A cross-sectional study design was adopted, and the study was conducted in three health facilities in Keiyo South Sub-County. Infants were selected using a two-stage cluster sampling procedure, and systematic random sampling was then used to select a total of 244 mothers and their infants: eighty-two (82) infants from Kamwosor sub-district hospital and eighty-one (81) each from the Nyaru and Chepkorio health facilities. Interview schedules, 24-hour dietary recalls, and food frequency questionnaires were used to collect dietary iron intake data. Biochemical tests were carried out at the health facilities using a Hemo-Control photometer; infants whose haemoglobin levels were below 11 g/dl were considered anaemic. Peripheral blood smears were then examined to ascertain the type of nutritional anaemia. Data were analyzed using the Statistical Package for Social Sciences (SPSS) software, version 17 (2009); dietary iron intake was analyzed using the NutriSurvey 2007 software. The mean haemoglobin value was 11.3 ± 0.84 g/dl. Of the infants, 21.7% were anaemic, and all (100%) of the peripheral blood smears indicated iron deficiency anaemia. Dietary iron intake was a predictor of iron deficiency anaemia in this study (t = -3.138; p = 0.01). Iron deficiency anaemia was evident among infants in Keiyo South Sub-County. The Ministry of Health should formulate and implement policies on screening for anaemia and ensure intensive nutrition education on iron-rich diets during child welfare clinics.
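
A minimal sketch of the study's classification rule and prevalence computation; the 11 g/dl cut-off, sample size, and haemoglobin mean/SD come from the abstract, while the individual values below are synthetic stand-ins for the measured data:

```python
# Classify anaemia (haemoglobin < 11 g/dl) and compute prevalence.
import numpy as np

rng = np.random.default_rng(0)
hb = rng.normal(11.3, 0.84, size=244)   # synthetic Hb values [g/dl]

anaemic = hb < 11.0
print(f"prevalence: {anaemic.mean():.1%} (n = {anaemic.sum()} of {hb.size})")
```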

Relevance:

30.00%

Publisher:

Abstract:

Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine seasonal- to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes. Such a system would enable a new generation of global ocean prediction systems in support of studies of carbon cycling, acidification, hypoxia, and harmful algal blooms, as well as the management of living marine resources. In preparation for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Sub-polar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas seasonally covered with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects motivated a planning meeting to discuss the requirements for, and applications of, a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array.
Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the resolution needed to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models. With an endurance of four years per float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000-float array. The lifetime cost of a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000, so a global Biogeochemical-Argo system would cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, with ~$6,250,000 each for the EU and for Austral/Asia and Canada. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). Full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade; the successful, ongoing pilot projects have provided the foundation and starting point for such a system.
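
The cost arithmetic above can be written out directly; all figures are taken from the text:

```python
# Annual cost of maintaining a 1000-float Biogeochemical-Argo array.
floats_in_array = 1000
endurance_years = 4
lifetime_cost = 100_000      # USD per float, all-in

replacements_per_year = floats_in_array / endurance_years  # 250 floats/yr
annual_cost = replacements_per_year * lifetime_cost        # $25,000,000
us_share = annual_cost / 2                                 # $12,500,000
other_share = annual_cost / 4   # EU, and Austral/Asia + Canada, each
print(replacements_per_year, annual_cost, us_share, other_share)
```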

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation for obtaining the degree of Master in Product Design, presented at the Universidade de Lisboa - Faculdade de Arquitectura.

Relevance:

30.00%

Publisher:

Abstract:

This master's thesis introduces procedures for assessing daylighting performance in office rooms with shaded openings, with recommendations for Natal-RN (latitude 5°47' S, longitude 35°11' W). The studies assume the need for exterior window shading in hot-and-humid-climate buildings. The daylighting performance analyses are based on simulated results for three illuminance levels (300, 500, and 1000 lux) between 08h00 and 16h00, in rooms 2.80 m high and 6 m wide, with depths of 4 m, 6 m, and 8 m, a centered single opening, window-to-wall ratios of 20%, 40%, and 60%, four orientations (north, east, south, and west), and two sky types (clear and partially cloudy). The sky characteristics were determined statistically from hourly data recorded at the INPE-CRN solar and daylighting weather station. The lighting performance results from dynamic computer simulation of 72 models using TropLux 3.12. The simulation results were assessed with a new parameter for quantifying the use of interior daylight, the useful percentage of daylight (PULN), which corresponds to the fraction of time with satisfactory light according to the design illuminance. The passive zone depths are defined based on the PULN. Despite failures in the illuminance data from the weather station, the analyses confirmed the high potential of daylighting for shaded rooms. The most influential variables on lighting performance are the opening size and the design illuminance, while orientation has little influence.
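
A minimal sketch of the PULN metric as described above; this is an interpretation of the definition, with hypothetical hourly illuminances rather than the thesis's TropLux output:

```python
# PULN: fraction of working hours (08h00-16h00) in which simulated
# interior daylight meets or exceeds the design illuminance.
import numpy as np

def puln(illuminance_lux, target_lux):
    """illuminance_lux: hourly simulated values for one point/room."""
    ok = np.asarray(illuminance_lux) >= target_lux
    return ok.mean()

hourly = [120, 340, 610, 820, 900, 760, 480, 210, 90]  # hypothetical
for target in (300, 500, 1000):
    print(f"PULN at {target} lux: {puln(hourly, target):.0%}")
```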

Relevance:

30.00%

Publisher:

Abstract:

This Open Access article is distributed under the terms of the Creative Commons Attribution Noncommercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To develop and optimize variables that influence the formulation of fluoxetine orally disintegrating tablets (ODTs). Methods: Fluoxetine ODTs were prepared by direct compression. A three-factor, three-level Box-Behnken design was used to develop and optimize the formulation. The design suggested 15 formulations of varying lubricant concentration (X1), lubricant mixing time (X2), and compression force (X3), and their effects were monitored on tablet weight (Y1), thickness (Y2), hardness (Y3), % friability (Y4), and disintegration time (Y5). Results: All powder blends showed acceptable flow properties, ranging from good to excellent. Disintegration time (Y5) was directly affected by lubricant concentration (X1). Lubricant mixing time (X2) had a direct effect on tablet thickness (Y2) and hardness (Y3), while compression force (X3) had a direct impact on tablet hardness (Y3), % friability (Y4), and disintegration time (Y5). Accordingly, the Box-Behnken design suggested an optimized formula of 0.86 mg (X1), 15.3 min (X2), and 10.6 kN (X3). The prediction error percentages for responses Y1, Y2, Y3, Y4, and Y5 were 0.31, 0.52, 2.13, 3.92, and 3.75%, respectively. Formulas 4 and 8 achieved 90% drug release within the first 5 min of the dissolution test. Conclusion: A fluoxetine ODT formulation has been developed and optimized successfully using a Box-Behnken design and manufactured efficiently by direct compression.
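
For reference, a 15-run three-factor Box-Behnken matrix in coded units can be built as below; the construction (edge midpoints plus three center runs) is the standard one and the factor labels follow the abstract, offered as a sketch rather than the authors' software output:

```python
# 15-run Box-Behnken design for three factors in coded units.
import itertools
import numpy as np

def box_behnken_3():
    runs = []
    for i, j in itertools.combinations(range(3), 2):   # each factor pair
        for a, b in itertools.product([-1, 1], repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b                      # third factor at 0
            runs.append(row)
    runs += [[0, 0, 0]] * 3                            # center points
    return np.array(runs)

design = box_behnken_3()    # 12 edge-midpoint runs + 3 center runs
print(design.shape)         # (15, 3)
# Columns: X1 lubricant conc., X2 lubricant mixing time, X3 compression force
```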

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To design and develop a new series of histone deacetylase (HDAC) inhibitors (FP1-FP12) and evaluate their inhibitory activity against an HDAC enzyme mixture derived from HeLa cervical carcinoma cells, as well as against the MCF-7 cell line. Methods: The designed molecules (FP1-FP12) were docked using AUTODOCK 1.4.6; FP3 and FP8 showed interactions comparable to the prototypical HDAC inhibitor. The designed series of 2-[[(3-phenyl/substituted phenyl-[4-{(4-(substituted phenyl)ethylidine-2-phenyl-1,3-imidazol-5-one}](-4H-1,2,4-triazol-5-yl)sulfanyl]-N-hydroxyacetamide derivatives (FP1-FP12) was synthesized by merging 2-[(4-amino-3-phenyl-4H-1,2,4-triazol-5-yl)sulfanyl]-N-hydroxyacetamide and 2-{[4-amino-3-(2-hydroxyphenyl)-4H-1,2,4-triazol-5-yl]sulfanyl}-N-hydroxyacetamide derivatives with aromatic substituted oxazolones. The biological activity of the synthesized molecules (FP1-FP12) was evaluated against the HDAC enzyme mixture derived from HeLa cervical carcinoma cells and against the breast cancer cell line MCF-7. Results: FP10 showed a higher HDAC-inhibitory IC50 (half-maximal inhibitory concentration) of 0.09 μM, whereas the standard, SAHA, showed an IC50 of 0.057 μM. FP9 exhibited a higher GI50 (concentration producing 50% inhibition of cell proliferation) of 22.8 μM against the MCF-7 cell line, compared with a GI50 of (-) 50.2 μM for the standard, adriamycin. Conclusion: Synthesis, spectral characterization, and evaluation of HDAC-inhibitory activity and in vitro anticancer activity of novel hydroxyacetamide derivatives against the MCF-7 cell line have been achieved. The findings indicate the emergence of potential anticancer compounds.

Relevance:

30.00%

Publisher:

Abstract:

Recent developments in micro- and nanoscale 3D fabrication techniques have enabled the creation of materials with a controllable nanoarchitecture whose structural features span five orders of magnitude, from tens of nanometers to millimeters. These fabrication methods, in conjunction with nanomaterial processing techniques, permit a nearly unbounded design space through which new combinations of nanomaterials and architecture can be realized. In the course of this work, we designed, fabricated, and mechanically analyzed a wide range of nanoarchitected materials in the form of nanolattices made from polymer, composite, and hollow ceramic beams. Using a combination of two-photon lithography and atomic layer deposition, we fabricated samples with periodic and hierarchical architectures spanning densities over four orders of magnitude, from ρ = 0.3 to 300 kg/m³, with features as small as 5 nm. Uniaxial compression and cyclic loading tests performed on different nanolattice topologies revealed a range of novel mechanical properties: the constituent nanoceramics used here have size-enhanced strengths that approach the theoretical limit of material strength; hollow aluminum oxide (Al2O3) nanolattices exhibited ductile-like deformation and recovered nearly completely after compression to 50% strain when their wall thicknesses were reduced below 20 nm, owing to the activation of shell buckling; hierarchical nanolattices exhibited enhanced recoverability and near-linear scaling of strength and stiffness with relative density, with E ∝ ρ^1.04 and σ_y ∝ ρ^1.17 for hollow Al2O3 samples; periodic rigid and non-rigid nanolattice topologies were tested and showed nearly uniform scaling of strength and stiffness with relative density, marking a significant deviation from traditional theories of “bending”- and “stretching”-dominated cellular solids; and the mechanical behavior across all topologies was highly tunable and strongly correlated with the slenderness λ and the wall thickness-to-radius ratio t/a of the beams. These results demonstrate the potential of nanoarchitected materials to create new, highly tunable mechanical metamaterials with previously unattainable properties.
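
Scaling exponents like those reported can be recovered from compression data with an ordinary log-log least-squares fit; the sketch below uses synthetic data shaped to the ~1.17 exponent purely for illustration, not the measured dataset:

```python
# Fit sigma_y = C * rho^m on log-log axes by least squares.
import numpy as np

rho = np.array([0.5, 2.0, 10.0, 50.0, 250.0])   # density [kg/m^3]
noise = 1 + 0.05 * np.random.default_rng(1).standard_normal(rho.size)
sigma_y = 0.02 * rho**1.17 * noise              # synthetic strengths

m, logC = np.polyfit(np.log(rho), np.log(sigma_y), 1)
print(f"fitted exponent m = {m:.2f}, prefactor C = {np.exp(logC):.3g}")
```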

Relevance:

30.00%

Publisher:

Abstract:

Sub-wavelength structures are enabling the design of dielectric-waveguide-based devices with unprecedented performance in both the near-infrared and mid-infrared wavelength regions. These devices include fiber-to-chip grating couplers with sub-decibel efficiency, waveguide couplers with bandwidths of several hundred nanometers, and low-loss suspended waveguides. Here we report our progress in the electromagnetic modelling and simulation of sub-wavelength structures, while providing an intuitive picture of their fundamental optical properties. Furthermore, we address design strategies for several integrated optical devices based on these structures and present the latest experimental results for structures operating at both near- and mid-infrared wavelengths.
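
A useful zeroth-order intuition for such structures is Rytov's effective-medium approximation: well below the diffraction limit, a grating behaves as a homogeneous medium whose index depends on fill factor and polarization. The sketch below is standard background rather than the authors' full electromagnetic model, and the silicon/air indices are assumed example values:

```python
# Zeroth-order effective-medium (Rytov) indices of a 1-D sub-wavelength
# grating; f is the fill factor of the high-index material.
import math

def n_eff_te(n_hi, n_lo, f):
    """E-field parallel to the grating lines: permittivities average."""
    return math.sqrt(f * n_hi**2 + (1 - f) * n_lo**2)

def n_eff_tm(n_hi, n_lo, f):
    """E-field perpendicular: inverse permittivities average."""
    return math.sqrt(1 / (f / n_hi**2 + (1 - f) / n_lo**2))

# Silicon/air example: sweeping f synthesizes an intermediate index.
for f in (0.3, 0.5, 0.7):
    print(f"f={f}: n_TE={n_eff_te(3.48, 1.0, f):.2f}, "
          f"n_TM={n_eff_tm(3.48, 1.0, f):.2f}")
```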

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes and investigates a metaheuristic tabu search algorithm (TSA) that generates optimal or near-optimal solution sequences for the feedback length minimization problem (FLMP) associated with a design structure matrix (DSM). The FLMP is a non-linear combinatorial optimization problem in the NP-hard class, so finding an exact optimal solution is very hard and time-consuming, especially on medium and large problem instances. First, we introduce the subject and review the related literature and problem definitions. Using the tabu search method (TSM) paradigm, we then present a new tabu search algorithm that generates optimal or sub-optimal solutions for the FLMP using two neighborhoods: swapping two activities, and shifting an activity to a different position. The paper also includes numerical results for analyzing the performance of the proposed TSA and for fixing proper values of its parameters. We then compare our results on benchmark problems with those already published in the literature. We conclude that the proposed tabu search algorithm is very promising: it outperforms the existing methods, and no other tabu search method for the FLMP has been reported in the literature. Applied to the process layer of multidimensional design structure matrices, the proposed tabu search algorithm proves to be a key optimization method for optimal product development.
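
A generic best-neighbor tabu search skeleton using the two neighborhoods described (pairwise swap and single-activity shift) might look as follows; the cost function, i.e. the feedback length computed from the DSM, is assumed to be supplied by the caller, and this is a minimal sketch rather than the paper's TSA:

```python
# Minimal tabu search over activity sequences with swap/shift moves.
import itertools

def tabu_search(seq, cost, iters=200, tenure=8):
    cur, best = list(seq), list(seq)
    tabu = {}                                 # move -> iteration it expires
    for it in range(iters):
        best_move, best_cand, best_c = None, None, float("inf")
        for i, j in itertools.combinations(range(len(cur)), 2):
            for kind in ("swap", "shift"):
                cand = cur[:]
                if kind == "swap":
                    cand[i], cand[j] = cand[j], cand[i]
                else:                         # shift activity i to slot j
                    cand.insert(j, cand.pop(i))
                c = cost(cand)
                # Skip tabu moves unless they beat the best found (aspiration)
                if tabu.get((kind, i, j), -1) > it and c >= cost(best):
                    continue
                if c < best_c:
                    best_move, best_cand, best_c = (kind, i, j), cand, c
        if best_cand is None:
            break                             # whole neighborhood is tabu
        cur = best_cand
        tabu[best_move] = it + tenure
        if best_c < cost(best):
            best = cur[:]
    return best

# Usage: tabu_search(range(n_activities), my_feedback_length)
```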


Relevance:

30.00%

Publisher:

Abstract:

Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce the seizure burden. Current AEDs exhibit sub-optimal efficacy, and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic-ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td), and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resulting sample sizes. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5% to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with a median difference of 30.7-fold (IQR: 13.7–40.0) across a range of AED protocols, Td, and trial AED efficacy (p < 0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p < 0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1-fold (IQR: 1.7–2.9; p < 0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared with trials in normothermic neonates (p < 0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and an analysis that controls for differences in Td between groups will be valid and will minimise sample size.
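
The flavor of power calculation involved can be illustrated with the standard normal-approximation sample-size formula for comparing two proportions; the response rates below are hypothetical, and the study's simulation-based calculations are richer than this sketch:

```python
# Per-arm sample size for a two-proportion comparison (normal approx.).
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# A positive-control comparison (small effect) needs far more neonates
# than a placebo-controlled one (large effect), echoing the abstract.
print(n_per_arm(0.30, 0.60))   # vs. a placebo-like response rate
print(n_per_arm(0.50, 0.60))   # vs. an active comparator
```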