982 results for Free Design
Abstract:
This article looks at the contemporary reinvention of the term Media Ecologies in the work of Matthew Fuller, arguing that its provenance is less from Postman's Media Ecology Association and more from the work of Felix Guattari. It then presents an account of free radios in Italy and France in the 1970s, and of contemporary pirate radio, as exemplary cases of media ecologies in Fuller's sense of the term.
Abstract:
PURPOSE: Conventional staging methods are inadequate to identify patients with stage II colon cancer (CC) who are at high risk of recurrence after surgery with curative intent. ColDx is a gene expression, microarray-based assay shown to be independently prognostic for recurrence-free interval (RFI) and overall survival in CC. The objective of this study was to further validate ColDx using formalin-fixed, paraffin-embedded specimens collected as part of the Alliance phase III trial, C9581.
PATIENTS AND METHODS: C9581 evaluated edrecolomab versus observation in patients with stage II CC and reported no survival benefit. Under an initial case-cohort sampling design, a randomly selected subcohort (RS) comprised 514 patients from 901 eligible patients with available tissue. Forty-nine additional patients with recurrence events were included in the analysis. Final analysis comprised 393 patients: 360 RS (58 events) and 33 non-RS events. Risk status was determined for each patient by ColDx. The Self-Prentice method was used to test the association between the resulting ColDx risk score and RFI adjusting for standard prognostic variables.
RESULTS: Fifty-five percent of patients (216 of 393) were classified as high risk. After adjustment for prognostic variables that included mismatch repair (MMR) deficiency, ColDx high-risk patients exhibited significantly worse RFI (multivariable hazard ratio, 2.13; 95% CI, 1.3 to 3.5; P < .01). Age and MMR status were marginally significant. RFI at 5 years for patients classified as high risk was 82% (95% CI, 79% to 85%), compared with 91% (95% CI, 89% to 93%) for patients classified as low risk.
CONCLUSION: ColDx is associated with RFI in the C9581 subsample in the presence of other prognostic factors, including MMR deficiency. ColDx could be incorporated with the traditional clinical markers of risk to refine patient prognosis.
Abstract:
We report the discovery, tracking, and detection circumstances for 85 trans-Neptunian objects (TNOs) from the first 42 deg2 of the Outer Solar System Origins Survey. This ongoing r-band solar system survey uses the 0.9 deg2 field of view MegaPrime camera on the 3.6 m Canada–France–Hawaii Telescope. Our orbital elements for these TNOs are precise to a fractional semimajor axis uncertainty <0.1%. We achieve this precision in just two oppositions, as compared to the normal three to five oppositions, via a dense observing cadence and innovative astrometric technique. These discoveries are free of ephemeris bias, a first for large trans-Neptunian surveys. We also provide the necessary information to enable models of TNO orbital distributions to be tested against our TNO sample. We confirm the existence of a cold "kernel" of objects within the main cold classical Kuiper Belt and infer the existence of an extension of the "stirred" cold classical Kuiper Belt to at least several au beyond the 2:1 mean motion resonance with Neptune. We find that the population model of Petit et al. remains a plausible representation of the Kuiper Belt. The full survey, to be completed in 2017, will provide an exquisitely characterized sample of important resonant TNO populations, ideal for testing models of giant planet migration during the early history of the solar system.
Abstract:
In the marketplace, complimentary gifts can take the form of experiential elements (e.g., a meal) or material items (e.g., tangible objects such as a mug). We identify these free gifts as a meaningful service design choice that helps service providers innovate service. Specifically, we examine the circumstances under which experiential or material gifts are preferred and generate greater consumer satisfaction, enhancing the overall service experience. Across three experiments, we demonstrate that consumers are generally happier with experiential offerings, and they prefer (and are more satisfied with) experiential offerings on ordinary consumption occasions; experiential elements are believed to further enrich otherwise mundane experiences. However, this experiential advantage disappears for consumers on meaningful and special occasions because of a strong desire to obtain a memory cue that will help them recall the experience. Indeed, the preference for a material item holds only when the gift has the quality to serve as a salient memory marker, but not when it lacks this quality. This research provides insight for managers to take into account consumption occasions or types of consumers (e.g., special occasions, repeat customers) when designing service bundles with complimentary gifts, and thus better manage the overall service experience.
Abstract:
Starting from an analysis of the problems encountered in the conceptual design phase, the different three-dimensional modelling techniques are presented, with particular attention to the subdivision method and the algorithms that govern it (Chaikin, Doo-Sabin). Some application examples of free-form and skeleton modelling are then proposed, followed by a comparison, on the same models, of the sequences and operations required with traditional parametric modelling techniques. An example of the use of the IronCAD software, the first software to combine parametric and direct modelling, is given. The limitations of parametric and history-free modelling in the conceptual phase of a project are described, leading to a definition of the characteristics of hybrid modelling, a new approach to modelling. The prototype under development, which attempts to apply the concepts of hybrid modelling concretely and is intended as the starting point for a new generation of CAD software, is briefly presented. Finally, the possibility of obtaining real-time simulations on models undergoing topological modifications is presented. Real-time simulation is made possible by a parametric reformulation of the linear elastic problem, which is then solved by the joint application of R-functions and the PGD method. Examples of real-time simulation follow.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A fully coupled non-linear effective stress finite difference (FD) model is built to examine the counter-intuitive recent findings on the dependence of the pore water pressure ratio on foundation contact pressure. Two alternative design scenarios for a benchmark problem are explored and contrasted in the light of construction emission rates using the EFFC-DFI methodology. A strain-hardening effective stress plasticity model is adopted to simulate the dynamic loading. Combinations of input motion, contact pressure, initial vertical total pressure and distance to the foundation centreline are employed as model variables to further investigate the control of permanent and variable actions on the residual pore pressure ratio. The model is verified against the Ghosh and Madabhushi high acceleration field test database. The outputs of this work are aimed at improving current computer-aided seismic foundation design, which relies on the ground's packing state and consistency. The results confirm that, on seismic excitation of shallow foundations, the likelihood of effective stress loss is greater at deeper depths and in the free field. For the benchmark problem, adopting a shallow foundation system instead of a piled foundation resulted in a 75% lower emission rate, a marked proportion of which is owed to reduced materials and haulage carbon cost.
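The pore water pressure ratio examined in this abstract can be illustrated with a toy calculation. A hedged sketch, assuming a water table at the ground surface, an illustrative saturated unit weight, and treating foundation contact pressure as a simple surcharge (a simplification; none of the values below are from the paper):

```python
# Hedged sketch: excess pore water pressure ratio r_u = delta_u / sigma'_v0,
# the quantity whose dependence on foundation contact pressure the study
# examines. All soil values below are illustrative, not from the paper.

GAMMA_SAT = 19.0   # saturated unit weight, kN/m^3 (assumed)
GAMMA_W = 9.81     # unit weight of water, kN/m^3

def vertical_effective_stress(depth_m: float, surcharge_kpa: float = 0.0) -> float:
    """Initial vertical effective stress at depth (water table at surface)."""
    return surcharge_kpa + (GAMMA_SAT - GAMMA_W) * depth_m

def pore_pressure_ratio(excess_u_kpa: float, depth_m: float,
                        contact_pressure_kpa: float = 0.0) -> float:
    """r_u = excess pore pressure / initial vertical effective stress.
    r_u approaching 1 indicates full effective-stress loss (liquefaction)."""
    sigma_v0 = vertical_effective_stress(depth_m, contact_pressure_kpa)
    return excess_u_kpa / sigma_v0

# The same excess pressure is a smaller fraction of sigma'_v0 under a loaded
# footing than in the free field:
free_field = pore_pressure_ratio(30.0, 5.0)                       # no foundation load
under_footing = pore_pressure_ratio(30.0, 5.0, contact_pressure_kpa=100.0)
```

Under these toy numbers the same excess pressure yields a higher r_u in the free field than beneath the loaded footing, consistent with the abstract's observation that effective stress loss is more likely in the free field.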
Abstract:
Isocyanates belong to a class of compounds of extreme commercial importance because of their use in the manufacture of polyurethanes. Polyurethanes are used in several applications such as adhesives, coatings, foams, thermoplastic resins, printing inks, foundry moulds and rubbers. Agglomerated cork stoppers are currently used for still, semi-sparkling and gaseous wines, beer and cider. Methylene diphenyl diisocyanate (MDI) is presently the isocyanate used in the production of the polyurethane-based adhesive, owing to its lower toxicity compared with the toluene diisocyanate (TDI) previously employed. However, free monomeric TDI or MDI, depending on the base polyurethane, can migrate from agglomerated cork stoppers to beverages and therefore needs to be kept under control. The presence of these compounds is usually investigated by HPLC with a fluorescence or UV-Vis detector, depending on the derivatising agent. Ultra Performance Liquid Chromatography with Diode Array Detection (UPLC-DAD) is replacing HPLC. The objective of this study is to determine which method is better for analysing isocyanates from agglomerated cork stoppers, essentially TDI, in order to quantify its monomer. A Design of Experiments (DOE) with three factors (column temperature, flow and solvent) at two levels was carried out. Eight experiments with three replications and two repetitions were performed. The significance of the factors was evaluated through an ANOVA, and the best factor levels were selected. As TDI has two isomers that were not always separated by this method, an ANOVA on the resolution between peaks was also performed. Design of Experiments proved to be a suitable statistical tool for determining the best conditions to quantify free isocyanates migrating from agglomerated cork stoppers to real foodstuffs.
The best factor levels to maximize peak area were a column temperature of 30 °C, a flow of 0.3 mL/min and 0.1% ammonium acetate as solvent; to maximize resolution they were the same, except that the solvent was 0.01% ammonium acetate.
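The two-level, three-factor design described above can be sketched in a few lines. A minimal illustration of a 2^3 design in coded -1/+1 units, with main effects computed from contrast averages; the response values are invented placeholders, not the study's chromatographic data:

```python
# Minimal sketch of a 2^3 two-level full factorial design like the one above:
# factors column temperature, flow, and solvent at coded levels -1/+1, with
# main effects estimated from contrast averages. The responses are invented
# placeholders, not the study's peak-area data.
from itertools import product

FACTORS = ["temperature", "flow", "solvent"]

def full_factorial(k: int):
    """All 2^k runs in coded units (-1, +1)."""
    return list(product((-1, 1), repeat=k))

def main_effects(design, responses):
    """Effect of each factor = mean(y at +1) - mean(y at -1)."""
    n = len(design)
    effects = []
    for j in range(len(design[0])):
        hi = sum(y for run, y in zip(design, responses) if run[j] == 1)
        lo = sum(y for run, y in zip(design, responses) if run[j] == -1)
        effects.append((hi - lo) / (n / 2))
    return effects

design = full_factorial(3)                             # 8 runs, as in the study
y = [12.1, 13.0, 11.8, 12.9, 14.2, 15.3, 14.0, 15.1]   # placeholder peak areas
effects = dict(zip(FACTORS, main_effects(design, y)))
```

Effect estimation of this kind is the first step; the study then judged factor significance with an ANOVA over the replicated runs.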
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm2, while the estimated area of the calibration circuits is 0.03 mm2. The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS implemented by Pingli Huang.
This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm2 silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm2 and dissipates 30.3 mW, including synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
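The successive-approximation loop shared by all three converters above can be sketched as a binary search against an ideal DAC. A minimal illustration with ideal binary capacitor weights; the dissertation's actual contribution, digitally calibrating the *mismatched* weights, is deliberately left out of this toy model:

```python
# Hedged sketch of the successive-approximation (SAR) conversion loop that
# underlies the ADCs above, with ideal binary DAC weights vref/2^k. Real
# capacitor DACs have mismatch, which the dissertation's background
# calibration corrects; this toy model assumes perfect weights.

def sar_convert(vin: float, vref: float, bits: int = 12) -> int:
    """Binary-search vin against a DAC built from ideal weights vref/2^k."""
    code = 0
    dac = 0.0
    for k in range(1, bits + 1):
        trial = dac + vref / (1 << k)     # tentatively set bit (bits - k)
        if vin >= trial:                  # comparator decision
            dac = trial                   # keep the bit
            code |= 1 << (bits - k)
    return code

# A mid-scale input (vin = vref/2) resolves to the mid-scale code:
code = sar_convert(0.6, 1.2, bits=12)
```

Each iteration halves the search interval, so an N-bit conversion takes exactly N comparator decisions, which is what makes the SAR architecture so energy-efficient.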
Abstract:
Chapter 1: Under the average common value function, we select almost uniquely the mechanism that gives the seller the largest portion of the true value in the worst situation, among all direct mechanisms that are feasible, ex-post implementable and individually rational. Chapter 2: Strategy-proof, budget-balanced, anonymous, envy-free linear mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of surplus loss to efficient surplus, over all profiles of non-negative valuations. The smallest efficiency loss is uniquely achieved by the following simple allocation rule: assign one object to each of the p−1 agents with the highest valuations, a large probability to the agent with the pth highest valuation, and the remaining probability to the agent with the (p+1)th highest valuation. When “envy freeness” is replaced by the weaker condition of “voluntary participation”, the optimal mechanism differs only when p is much less than n. Chapter 3: One group is to be selected among a set of agents. Agents have preferences over the size of the group if they are selected, and preferences over size as well as the “stand-outside” option are single-peaked. We take a mechanism design approach and search for group selection mechanisms that are efficient, strategy-proof and individually rational. Two classes of such mechanisms are presented. The proposing mechanism allows agents to either maintain or shrink the group size following a fixed priority, and is characterized by group strategy-proofness. The voting mechanism enlarges the group size in each voting round, and achieves at least half of the maximum group size compatible with individual rationality.
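The Chapter 2 allocation rule can be written down directly in expected-assignment form. A minimal sketch; the abstract does not pin down the value of the "large probability", so it appears here as an assumed free parameter q:

```python
# Sketch of the Chapter 2 allocation rule in expected-assignment form: the
# top p-1 valuations each receive an object for sure, the p-th highest gets
# a large probability q, and the (p+1)-th gets the remainder 1-q. The value
# of q is not specified in the abstract, so it is a free parameter here.

def allocation_probabilities(valuations, p, q=0.9):
    """Return each agent's probability of receiving one of the p objects."""
    order = sorted(range(len(valuations)), key=lambda i: -valuations[i])
    probs = [0.0] * len(valuations)
    for rank, agent in enumerate(order):
        if rank < p - 1:
            probs[agent] = 1.0          # top p-1 agents: sure winners
        elif rank == p - 1:
            probs[agent] = q            # p-th highest valuation
        elif rank == p:
            probs[agent] = 1.0 - q      # (p+1)-th highest valuation
    return probs

# Four agents, two objects: agent 1 (value 9.0) wins for sure, agent 2
# (value 7.0) gets probability q, agent 0 (value 5.0) gets the rest.
probs = allocation_probabilities([5.0, 9.0, 7.0, 3.0], p=2, q=0.9)
```

Note that the expected number of objects assigned always sums to p, so the rule is feasible in expectation.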
Abstract:
Local anesthetic agents cause temporary blockade of nerve impulses, producing insensitivity to painful stimuli in the area supplied by that nerve. Bupivacaine (BVC) is an amide-type local anesthetic widely used in surgery and obstetrics for sustained peripheral and central nerve blockade. In this study, we prepared and characterized nanosphere formulations containing BVC. To achieve these goals, BVC-loaded poly(DL-lactide-co-glycolide) (PLGA) nanospheres (NS) were prepared by nanoprecipitation and characterized with regard to size distribution, drug loading and cytotoxicity. A 2^(3-1) fractional factorial experimental design was used to study the influence of three different independent variables on nanoparticle drug loading. BVC was assayed by HPLC; particle size and zeta potential were determined by dynamic light scattering. Free BVC was determined using a combined ultrafiltration-centrifugation technique. The optimized formulations showed a narrow size distribution with a polydispersity of 0.05, an average diameter of 236.7 +/- 2.6 nm and a zeta potential of -2.93 +/- 1.10 mV. In toxicity studies with 3T3 fibroblast cells, BVC-loaded PLGA-NS increased cell viability in comparison with the effect produced by free BVC; in this way, BVC-loaded PLGA-NS decreased BVC toxicity. The development of BVC formulations in carriers such as nanospheres could offer the possibility of controlling drug delivery in biological systems, prolonging the anesthetic effect and reducing toxicity.
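The 2^(3-1) design used above halves the run count of a full 2^3 factorial by aliasing the third factor to the product of the other two. A minimal sketch of such a half fraction, using the conventional defining relation C = AB; the actual factor assignments in the study are not given in the abstract:

```python
# Hedged sketch of a 2^(3-1) fractional factorial design like the one used
# above to probe three variables' influence on drug loading: a half fraction
# of the full 2^3 design, generated with the defining relation C = A*B.
from itertools import product

def half_fraction_2_3():
    """Four runs in coded units: vary A and B over (-1, +1), alias C to A*B."""
    return [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]

design = half_fraction_2_3()   # 4 runs instead of the full 8
```

The price of halving the runs is confounding: with C = AB, each main effect is aliased with the two-factor interaction of the other two factors, which is acceptable when interactions are assumed small.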
Abstract:
Optimization of Carnobacterium divergens V41 growth and bacteriocin activity in a culture medium free of animal protein, needed for food bioprotection, was performed using a statistical approach. In a screening experiment, twelve factors (pH, temperature, carbohydrates, NaCl, yeast extract, soy peptone, sodium acetate, ammonium citrate, magnesium sulphate, manganese sulphate, ascorbic acid and thiamine) were tested for their influence on maximal growth and bacteriocin activity using a two-level incomplete factorial design with 192 experiments performed in microtiter plate wells. Based on the results, a basic medium was developed and three variables (pH, temperature and carbohydrate concentration) were selected for a scale-up study in a bioreactor. A 2^3 full factorial design was performed, allowing the estimation of the linear effects of the factors and all first-order interactions. The best conditions for cell production were obtained at a temperature of 15 °C and a carbohydrate concentration of 20 g/l whatever the pH (in the range 6.5-8), and the best conditions for bacteriocin activity were obtained at 15 °C and pH 6.5 whatever the carbohydrate concentration (in the range 2-20 g/l). The predicted final count of C. divergens V41 and the bacteriocin activity under the optimized conditions (15 °C, pH 6.5, 20 g/l carbohydrates) were 2.4 x 10^10 CFU/ml and 819,200 AU/ml respectively. C. divergens V41 cells cultivated under the optimized conditions were able to grow in cold-smoked salmon and totally inhibited the growth of Listeria monocytogenes (< 50 CFU/g) during five weeks of vacuum storage at 4 °C and 8 °C.
Abstract:
In this thesis work, the meshfree method with distance fields was coupled with the lattice Boltzmann method to obtain solutions of fluid-structure interaction problems. The work involved the development and implementation of numerical algorithms, data structures, and software. Numerical and computational properties of the coupling algorithm combining the meshfree method with distance fields and the lattice Boltzmann method were investigated. Convergence and accuracy of the methodology were validated against analytical solutions. The research focused on fluid-structure interaction solutions in complex, mesh-resistant domains, as both the lattice Boltzmann method and the meshfree method with distance fields are particularly adept in these situations. Furthermore, the fluid solution provided by the lattice Boltzmann method is massively scalable, allowing extensive use of cutting-edge parallel computing resources to accelerate this phase of the solution process. The meshfree method with distance fields allows for exact satisfaction of boundary conditions, making it possible to exactly capture the effects of the fluid field on the solid structure.
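The lattice Boltzmann side of such a coupling can be illustrated in a few dozen lines. A minimal pure-Python D2Q9 BGK sketch on a periodic grid, with an assumed relaxation time and an illustrative density bump; the meshfree solid coupling and any non-trivial boundary treatment are omitted:

```python
# Minimal sketch of the D2Q9 lattice Boltzmann method (BGK collision plus
# streaming on a periodic grid) of the kind the thesis couples to the
# meshfree method with distance fields. Grid size, relaxation time, and the
# initial density bump are illustrative choices; the solid coupling is omitted.

E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]          # D2Q9 lattice velocities
W = [4/9] + [1/9] * 4 + [1/36] * 4                 # lattice weights
TAU = 0.8                                          # BGK relaxation time (assumed)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each lattice direction."""
    feq = []
    for (ex, ey), w in zip(E, W):
        eu = ex * ux + ey * uy
        feq.append(w * rho * (1 + 3*eu + 4.5*eu*eu - 1.5*(ux*ux + uy*uy)))
    return feq

def step(f, nx, ny):
    """One collide-and-stream update; returns the new distributions."""
    post = [[None] * ny for _ in range(nx)]
    for x in range(nx):                            # BGK collision
        for y in range(ny):
            rho = sum(f[x][y])
            ux = sum(fi * ex for fi, (ex, _) in zip(f[x][y], E)) / rho
            uy = sum(fi * ey for fi, (_, ey) in zip(f[x][y], E)) / rho
            feq = equilibrium(rho, ux, uy)
            post[x][y] = [fi - (fi - fe) / TAU for fi, fe in zip(f[x][y], feq)]
    new = [[[0.0] * 9 for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):                            # streaming, periodic wrap
        for y in range(ny):
            for i, (ex, ey) in enumerate(E):
                new[(x + ex) % nx][(y + ey) % ny][i] = post[x][y][i]
    return new

# Start from a resting fluid with a small density bump; total mass is conserved.
nx = ny = 8
f = [[equilibrium(1.0 + (0.1 if (x, y) == (4, 4) else 0.0), 0.0, 0.0)
      for y in range(ny)] for x in range(nx)]
mass0 = sum(sum(f[x][y]) for x in range(nx) for y in range(ny))
f = step(f, nx, ny)
mass1 = sum(sum(f[x][y]) for x in range(nx) for y in range(ny))
```

The purely local collision and nearest-neighbour streaming are what make the method so amenable to the massive parallelization the thesis exploits.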
Abstract:
Shearing is the process where sheet metal is mechanically cut between two tools. Various shearing technologies are commonly used in the sheet metal industry, for example in cut-to-length lines, slitting lines and end cropping. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material. The constant development of sheet metals toward higher strength and formability leads to increased forces on the shearing equipment and tools. Shearing of new sheet materials therefore calls for new suitable shearing parameters. Investigating shearing parameters through live tests in production is expensive, and separate experiments are time-consuming and require specialized equipment. Studies involving a large number of parameters and coupled effects are therefore preferably performed by finite element based simulations, but accurate experimental data to validate such simulations are in short supply. In industrial shearing processes, measured forces are always larger than the actual forces acting on the sheet, due to friction losses. Shearing also generates a force that attempts to separate the two tools, which changes the shearing conditions through an increased clearance between them. Tool clearance is also the most common shearing parameter to adjust, depending on material grade and sheet thickness, to moderate the required force and to control the final sheared edge geometry. In this work, an experimental procedure that provides a stable tool clearance together with accurate measurements of tool forces and tool displacements was designed, built and evaluated. Important shearing parameters and demands on the experimental set-up were identified in a sensitivity analysis performed with finite element simulations under the assumption of plane strain.
With respect to tool clearance stability and accurate force measurements, a symmetric experiment with two simultaneous shears and internal balancing of the forces attempting to separate the tools was constructed. Steel sheets of different strength levels were sheared using the above-mentioned experimental set-up, with various tool clearances, sheet clamping conditions and rake angles. Results showed that tool penetration before fracture decreased with increased material strength. When one side of the sheet was left unclamped and free to move, the required shearing force decreased but the force attempting to separate the two tools increased. Further, the maximum shearing force decreased and the rollover increased with increased tool clearance. Digital image correlation was applied to measure strains on the sheet surface. The obtained strain fields, together with a material model, were used to compute the stress state in the sheet. A comparison, up to crack initiation, of these experimental results with corresponding results from finite element simulations in three dimensions and at a plane strain approximation showed that effective strains on the surface are representative also of the bulk material. A simple model was successfully applied to calculate the tool forces in shearing with angled tools from forces measured with parallel tools. These results suggest that, with respect to tool forces, a plane strain approximation is valid also for angled tools, at least for small rake angles. In general terms, this study provides a stable symmetric experimental set-up with internal balancing of lateral forces, for accurate measurements of tool forces, tool displacements and sheet deformations, to study the effects of important shearing parameters. The results give further insight into the strain and stress conditions at crack initiation during shearing, and can also be used to validate models of the shearing process.