923 results for Pre-design
A simulation-based design method to transfer surface mount RF system to flip-chip die implementation
Abstract:
Flip-chip technology is a high-chip-density solution that meets the demands of very-large-scale integration design. For wireless sensor nodes and similar RF applications, the growing requirements for wearable and implantable implementations make flip-chip a leading technology for integration and miniaturization. In this paper, the flip-chip package is considered as part of the whole system affecting RF performance. A simulation-based design method is presented to transfer a surface-mount PCB to a flip-chip die package for RF applications. Models are built in Q3D Extractor to extract equivalent circuits from the parasitic parameters of the interconnections, for both bare-die and wire-bonding technologies. These parameters, together with the PCB layout and stack-up, are then modeled in the design of the essential parts of the flip-chip RF circuit. Through simulation and optimization, a flip-chip package is re-designed using parameters obtained from a simulation sweep. Experimental results agree well with simulation in the comparison of the bare-die package's return-loss performance before and after optimization. This design method can be used generally to transfer any surface-mount PCB to a flip-chip package for RF systems, or to predict the RF specifications of an RF system using flip-chip technology.
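As a brief, hedged aside on the return-loss figure of merit that this abstract's pre/post-optimization comparison is based on, a minimal sketch follows; the impedance values are illustrative assumptions, not data from the paper:

```python
import math

def return_loss_db(z_load, z0=50.0):
    """Return loss in dB of a load impedance against a reference impedance
    (50 ohms is the usual RF system impedance). Higher return loss means
    less reflected power, i.e. a better match."""
    gamma = abs((z_load - z0) / (z_load + z0))  # reflection coefficient |Γ|
    return -20.0 * math.log10(gamma)

# A package parasitic that pulls the input impedance away from 50 ohms
# degrades the match; re-designing the flip-chip interconnect restores it.
print(round(return_loss_db(75 + 0j), 1))  # mismatched load -> 14.0 dB
print(round(return_loss_db(52 + 1j), 1))  # nearly matched -> 33.2 dB
```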
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred meters) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical-cavity surface-emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s/fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep-submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques.
These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format can increase the throughput per individual channel, which helps to overcome the challenges mentioned above in realizing 400Gb/s to 1Tb/s transceivers.
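As a hedged illustration of the PAM-4 format mentioned above (not code from the work itself), a minimal sketch mapping bit pairs to four amplitude levels might look like this; the Gray-coded level map is a common convention assumed here:

```python
# Minimal PAM-4 illustration: map pairs of bits to one of four amplitude
# levels. Gray coding (00->-3, 01->-1, 11->+1, 10->+3) is a common choice
# because adjacent levels then differ by a single bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence as PAM-4 symbols."""
    if len(bits) % 2:
        raise ValueError("PAM-4 consumes two bits per symbol")
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Two bits per symbol: 8 bits become 4 symbols, doubling the throughput
# per channel relative to binary (NRZ) signalling at the same symbol rate.
print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
```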
Abstract:
BACKGROUND: Controversies exist regarding the indications for unicompartmental knee arthroplasty. The objective of this study is to report the mid-term results and examine predictors of failure in a metal-backed unicompartmental knee arthroplasty design. METHODS: At a mean follow-up of 60 months, 80 medial unicompartmental knee arthroplasties (68 patients) were evaluated. Implant survivorship was analyzed using the Kaplan-Meier method. The Knee Society objective and functional scores and radiographic characteristics were compared before surgery and at final follow-up. A Cox proportional hazards model was used to examine the association of patient age, gender, obesity (body mass index > 30 kg/m2), diagnosis, Knee Society scores and patellar arthrosis with failure. RESULTS: There were 9 failures during the follow-up. The mean Knee Society objective and functional scores were respectively 49 and 48 points preoperatively and 95 and 92 points postoperatively. The survival rate was 92% at 5 years and 84% at 10 years. The mean age was younger in the failure group than in the non-failure group (p < 0.01). However, none of the factors assessed was independently associated with failure based on the results from the Cox proportional hazards model. CONCLUSION: Gender, pre-operative diagnosis, preoperative objective and functional scores and patellar osteophytes were not independent predictors of failure of unicompartmental knee implants, although high body mass index trended toward significance. The findings suggest that the standard criteria for UKA may be expanded without compromising the outcomes, although caution may be warranted in patients with very high body mass index pending additional data to confirm our results. LEVEL OF EVIDENCE: IV.
Abstract:
Today, tuberculosis (TB) still remains one of the main global causes of mortality and morbidity, and an effective vaccine against both TB disease and Mycobacterium tuberculosis infection is essential to reach the updated post-2015 Millennium development goal of eradicating TB by 2050. During the last two decades much knowledge has accumulated on the pathogenesis of TB and the immune responses to infection by M. tuberculosis. Furthermore, many vaccine candidates are under development, and close to 20 of them have entered clinical assessment at various levels. Nevertheless, the M. tuberculosis-host interaction is very complex, and the full complexity of this interaction is still not sufficiently well understood to develop novel, rationally designed vaccines. However, some of the recent knowledge is now integrated into the design of various types of vaccine candidates to be used either as pre-exposure, as post-exposure or as therapeutic vaccines, as will be discussed in this paper.
Abstract:
This case study explored a single in-depth narrative of an episode of crisis. The participant, an English Jewish man in his late thirties (Guy), was selected using a ‘random purposeful’ design from a sample who had previously participated in a study on the experience of crisis in pre-midlife adulthood. From a subgroup of participants chosen for giving full accounts of both inner and outer dimensions of crisis, the individual was selected randomly. Data collection comprised two interviews followed by an email discussion. The crisis occurred in Guy’s late thirties, just before the midlife transition, and so can be considered a ‘pre-midlife’ crisis. It subsumed the period surrounding leaving a high-profile banking career and a dysfunctional marriage, and the ensuing attempts to rebuild life after this difficult and emotional period. Qualitative analysis found four trajectories of personal transformation over the course of the episode: Firstly there was a shift away from the use of a conventional persona to a more spontaneous and ‘authentic’ expression of self; secondly there was a move away from materialistic values toward relational values; thirdly a developing capacity to reflect on himself and his actions; fourthly an emerging feminine component of his personality. The case study portrays an extraordinary event in the life of an ordinary man approaching middle age. It illustrates the transformative nature of crisis in ordinary lives, the dramatic nature of narrative surrounding crisis, and also illustrates existing theory about the nature of adult crises.
Abstract:
OBJECTIVE
To assess the relationship between glycemic control, pre-eclampsia, and gestational hypertension in women with type 1 diabetes.
RESEARCH DESIGN AND METHODS
Pregnancy outcome (pre-eclampsia or gestational hypertension) was assessed prospectively in 749 women from the randomized controlled Diabetes and Pre-eclampsia Intervention Trial (DAPIT). HbA1c (A1C) values were available up to 6 months before pregnancy (n = 542), at the first antenatal visit (median 9 weeks) (n = 721), at 26 weeks’ gestation (n = 592), and at 34 weeks’ gestation (n = 519) and were categorized as optimal (<6.1%: referent), good (6.1–6.9%), moderate (7.0–7.9%), and poor (≥8.0%) glycemic control, respectively.
RESULTS
Pre-eclampsia and gestational hypertension developed in 17 and 11% of pregnancies, respectively. Women who developed pre-eclampsia had significantly higher A1C values before and during pregnancy compared with women who did not develop pre-eclampsia (P < 0.05, respectively). In early pregnancy, A1C ≥8.0% was associated with a significantly increased risk of pre-eclampsia (odds ratio 3.68 [95% CI 1.17–11.6]) compared with optimal control. At 26 weeks’ gestation, A1C values ≥6.1% (good: 2.09 [1.03–4.21]; moderate: 3.20 [1.47–7.00]; and poor: 3.81 [1.30–11.1]) and at 34 weeks’ gestation A1C values ≥7.0% (moderate: 3.27 [1.31–8.20] and poor: 8.01 [2.04–31.5]) significantly increased the risk of pre-eclampsia compared with optimal control. The adjusted odds ratios for pre-eclampsia for each 1% decrement in A1C before pregnancy, at the first antenatal visit, at 26 weeks’ gestation, and at 34 weeks’ gestation were 0.88 (0.75–1.03), 0.75 (0.64–0.88), 0.57 (0.42–0.78), and 0.47 (0.31–0.70), respectively. Glycemic control was not significantly associated with gestational hypertension.
CONCLUSIONS
Women who developed pre-eclampsia had significantly higher A1C values before and during pregnancy. These data suggest that optimal glycemic control both early and throughout pregnancy may reduce the risk of pre-eclampsia in women with type 1 diabetes.
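As a brief aside on the odds-ratio statistic this abstract reports, a minimal computation from a 2×2 table is sketched below; the counts are made up for illustration and are not DAPIT data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 12 of 60 poorly controlled vs. 30 of 500
# well-controlled pregnancies developing the outcome.
print(tuple(round(v, 2) for v in odds_ratio_ci(12, 48, 30, 470)))
```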
Abstract:
Objective Increased advanced glycation end-products (AGEs) and their soluble receptors (sRAGE) have been implicated in the pathogenesis of pre-eclampsia (PE). However, this association has not been elucidated in pregnancies complicated by diabetes. We aimed to investigate the serum levels of these factors in pregnant women with Type 1 diabetes mellitus (T1DM), a condition associated with a four-fold increase in PE. Design Prospective study in women with T1DM at 12.2 ± 1.9, 21.6 ± 1.5 and 31.5 ± 1.7 weeks of gestation [mean ± standard deviation (SD); no overlap] before PE onset. Setting Antenatal clinics. Population Pregnant women with T1DM (n = 118; 26 developed PE) and healthy nondiabetic pregnant controls (n = 21). Methods Maternal serum levels of sRAGE (total circulating pool), N-(carboxymethyl)lysine (CML), hydroimidazolone (methylglyoxal-modified proteins) and total AGEs were measured by immunoassays. Main outcome measures Serum sRAGE and AGEs in pregnant women with T1DM who subsequently developed PE (DM PE+) versus those who remained normotensive (DM PE-). Results In DM PE+ versus DM PE-, sRAGE was significantly lower in the first and second trimesters, prior to the clinical manifestation of PE (P < 0.05). Further, reflecting the net sRAGE scavenger capacity, sRAGE:hydroimidazolone was significantly lower in the second trimester (P < 0.05) and sRAGE:AGE and sRAGE:CML tended to be lower in the first trimester (P < 0.1) in women with T1DM who subsequently developed PE versus those who did not. These conclusions persisted after adjusting for prandial status, glycated haemoglobin (HbA1c), duration of diabetes, parity and mean arterial pressure as covariates. Conclusions In the early stages of pregnancy, lower circulating sRAGE levels, and the ratio of sRAGE to AGEs, may be associated with the subsequent development of PE in women with T1DM. © 2012 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2012 RCOG.
Abstract:
Reagent pre-storage in a microfluidic chip can enhance operator convenience, simplify the system design, reduce the cost of storage and shipment, and avoid the risk of cross-contamination. Although dry reagents have long been used in lateral flow immunoassays, they have rarely been used for nucleic acid-based point-of-care (POC) assays due to the lack of reliable techniques to dehydrate and store fragile molecules involved in the reaction. In this study, we describe a simple and efficient method for prolonged on-chip storage of PCR reagents. The method is based on gelification of all reagents required for PCR as a ready-to-use product. The approach was successfully implemented in a lab-on-a-foil system, and the gelification process was automated for mass production. Integration of reagents on-chip by gelification greatly facilitated the development of easy-to-use lab-on-a-chip (LOC) devices for fast and cost-effective POC analysis.
Abstract:
In this paper, we propose a novel finite impulse response (FIR) filter design methodology that reduces the number of operations, with the motivation of reducing power consumption and enhancing performance. The novelty of our approach lies in the generation of filter coefficients such that they conform to a given low-power architecture while meeting the given filter specifications. The proposed algorithm is formulated as a mixed integer linear programming problem that minimizes the Chebyshev error and synthesizes coefficients drawn from pre-specified alphabets. The new modified coefficients can be used for low-power VLSI implementation of vector scaling operations such as FIR filtering using a computation sharing multiplier (CSHM). Simulations in 0.25um technology show that the CSHM FIR filter architecture can result in 55% power and 34% speed improvement compared to carry save multiplier (CSAM) based filters.
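The abstract's MILP formulation is not given here; as a toy illustration of the underlying idea (coefficients drawn from a pre-specified alphabet, chosen to minimize the Chebyshev, i.e. minimax, frequency-response error), a brute-force sketch for a short symmetric low-pass filter follows. The alphabet, band edges and filter length are illustrative assumptions, and exhaustive search stands in for the MILP solver:

```python
import cmath
import math
from itertools import product

# Illustrative coefficient alphabet: values that are sums of a few powers
# of two are cheap to realize with shift-and-add hardware, which is the
# spirit of restricting coefficients to "pre-specified alphabets".
ALPHABET = [0.0, 0.125, 0.25, 0.375, 0.5]

def freq_response(h, w):
    """H(e^{jw}) of the FIR taps h at angular frequency w."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

def chebyshev_error(h, band):
    """Max (Chebyshev) deviation of |H| from the desired value over a band."""
    return max(abs(abs(freq_response(h, w)) - d) for w, d in band)

# Desired 5-tap symmetric low-pass: |H| near 1 below 0.25*pi, near 0
# above 0.6*pi; the transition band is left unconstrained.
band = [(i * 0.25 * math.pi / 15, 1.0) for i in range(16)] + \
       [(0.6 * math.pi + i * 0.4 * math.pi / 15, 0.0) for i in range(16)]

# Brute force stands in for the MILP: enumerate symmetric tap vectors
# (h0, h1, h2, h1, h0) with every entry drawn from the alphabet.
best = min(
    ((h0, h1, h2, h1, h0) for h0, h1, h2 in product(ALPHABET, repeat=3)),
    key=lambda h: chebyshev_error(h, band),
)
print(best, round(chebyshev_error(best, band), 3))
```

Symmetric taps keep the phase linear, so only three free coefficients need to be searched here.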
Abstract:
A meta-analysis was undertaken on a form of cooperative learning, peer tutoring. The effects of experimental design on outcomes were explored, as measured by Effect Size (ES). 185 studies were included in the meta-analysis. Highest ES were reported for quasi-experimental studies. ES reduced as experimental design moved from single pre-test factor matched, to multiple-factor matched randomized controlled trials. ES reduced when designs used standardised, rather than self-designed, measures. The implications for future meta-analyses and research in cooperative learning are explored.
Abstract:
PURPOSE: To evaluate the prevalence and causes of visual impairment among Chinese children aged 3 to 6 years in Beijing. DESIGN: Population-based prevalence survey. METHODS: Presenting and pinhole visual acuity were tested using picture optotypes or, in children with pinhole vision < 6/18, a Snellen tumbling E chart. Comprehensive eye examinations and cycloplegic refraction were carried out for children with pinhole vision < 6/18 in the better-seeing eye. RESULTS: All examinations were completed on 17,699 children aged 3 to 6 years (95.3% of sample). Subjects with bilateral correctable low vision (presenting vision < 6/18 correctable to ≥ 6/18) numbered 57 (0.322%; 95% confidence interval [CI], 0.237% to 0.403%), while 14 (0.079%; 95% CI, 0.038% to 0.120%) had bilateral uncorrectable low vision (best-corrected vision of < 6/18 and ≥ 3/60), and 5 subjects (0.028%; 95% CI, 0.004% to 0.054%) were bilaterally blind (best-corrected acuity < 3/60). The etiology of 76 cases of visual impairment included: refractive error in 57 children (75%), hereditary factors (microphthalmos, congenital cataract, congenital motor nystagmus, albinism, and optic nerve disease) in 13 children (17.1%), amblyopia in 3 children (3.95%), and cortical blindness in 1 child (1.3%). The cause of visual impairment could not be established in 2 (2.63%) children. The prevalence of visual impairment did not differ by gender, but correctable low vision was significantly (P < .0001) more common among urban as compared with rural children. CONCLUSION: The leading causes of visual impairment among Chinese preschool-aged children are refractive error and hereditary eye diseases. A higher prevalence of refractive error is already present among urban as compared with rural children in this preschool population.
Abstract:
A meta-analysis was undertaken on a form of cooperative learning, peer tutoring. The effects of experimental design on outcomes were explored, as measured by Effect Size (ES). Forty three articles with 82 effect size studies were included in the meta-analysis. Highest ES were reported for quasi-experimental studies. ES reduced as experimental design moved from single pre-test factor matched, to multiple-factor matched randomized controlled trials. ES reduced when designs used standardised, rather than self-designed measures. The implications for future meta-analyses and research in cooperative learning are explored.
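As a hedged aside on the Effect Size measure these meta-analyses aggregate, a minimal computation of the standard Cohen's d (not necessarily the exact index the included studies used) on made-up post-test scores might look like this:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # Unbiased sample variances (denominator n - 1).
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical post-test scores for a tutored vs. a control class.
tutored = [72, 75, 78, 80, 83]
control = [65, 70, 72, 74, 76]
print(round(cohens_d(tutored, control), 2))  # 1.46
```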
Abstract:
Objectives There is evidence from neuroscience, cognitive psychology and educational research that the delivery of a stimulus in a spaced format (over time) rather than a massed format (all at once) leads to more effective learning. This project aimed to pilot spaced learning materials using various spacing lengths for GCSE science, by exploring the feasibility of introducing spaced learning into regular classrooms and by evaluating teacher fidelity to the materials. The spaced learning methods will then be compared with traditional science revision techniques and a programme manual will be produced. Design A feasibility study. Methods A pilot study (4 schools) was carried out to examine feasibility and teacher fidelity to the materials, using pupil workshops and teacher interviews. A subsequent random-assignment experimental study (12 schools) will involve pre- and post-testing of students on a science attainment measure and a post-test implementation questionnaire. Results The literature review found that longer spacing intervals between repetitions of material (>24 hours) may be more effective for long-term memory formation than shorter intervals. A logic model was developed to inform the design of various programme variants for the pilot and experimental study. This paper will report qualitative data from the initial pilot study. Conclusions The paper uses this research project as an example to explain the importance of conducting pilot work and small-scale experimental studies to explore the feasibility and inform the design of educational interventions, rather than prematurely moving to RCT-type studies.
Abstract:
Since prehistory, the choice of materials has been linked to Art. Later, during the Modern Age, it gained ever greater importance. Only in the Contemporary Age, notably after the Industrial Revolution and during the Second World War, with the growth in the number of available materials, can one speak of true materials selection. It is also after the Industrial Revolution that the relationships between the evolution of materials and the movements and currents of the Fine Arts become clear. In this context, the interconnections between the design process and selection methodologies were studied, as well as the various typologies of tools available for this purpose. From this study, considering their respective advantages and limitations, it was possible to identify databases that are essentially technical or, conversely, inspiration tools with many images and little information on material properties. To complete this critical survey of selection processes and tools, fifty-three professionals working in different Portuguese design offices were interviewed. The questions put to the Portuguese designers addressed problems related to the choice of materials, covering the types of raw materials employed, the processes used and the quality of the information obtained. Following this study, several gaps were identified regarding the available resources, selection routines, quality of the existing information and methodologies used. It was in this context that the project to create a new methodology, supported by a digital tool, was initiated. Its main innovative aspects are: better interconnection and synchronization between the design methodology and the materials selection process; the information needed at each stage; and the highlighting of the factors that drive materials selection.
Another innovative element was the combination of three different modes of materials selection in a single tool (general, visual and specific) and the possibility of accessing different levels of information. Within the available resources, the methodology was materialized in the form of a digital tool (ptmaterials.com). The prototype was assessed through heuristic usability tests with the participation of nineteen users. Several interaction flaws were detected that constrained freedom and control of navigation within the interface. Users also mentioned gaps in error prevention and in the system's alignment with the usual logic of other existing applications. Nevertheless, it was encouraging that most designers rated the system as effective, efficient and satisfactory, and confirmed the value of the three selection modes. Subsequent analysis of the remaining usability-test results also highlighted the advantages of the different types of information provided and the usefulness of a tool of this nature for the national industry and economy. This tool is only a starting point, and there is room to improve the proposal, even though building a digital tool is a task of great complexity. Although it is a prototype, the tool is suited to the present day and can evolve in the future, with the potential to be adopted preferentially by other Portuguese-speaking countries.