941 results for Differences-in-Differences method


Relevance:

80.00%

Publisher:

Abstract:

The existing assignment problems for assigning n jobs to n individuals are limited to consideration of cost or profit measured as crisp values. In many real applications, however, costs are not deterministic numbers. This paper develops a procedure based on the Data Envelopment Analysis (DEA) method to solve assignment problems with fuzzy costs or fuzzy profits for each possible assignment. It aims to obtain the points with maximum membership values for the fuzzy parameters while maximizing the profit or minimizing the assignment cost. In this method, a discrete approach is first presented to rank the fuzzy numbers. Then, corresponding to each fuzzy number, we introduce a crisp number using the efficiency concept. A numerical example illustrates the usefulness of the new method. © 2012 Operational Research Society Ltd. All rights reserved.
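The defuzzify-then-assign idea can be sketched in a few lines. This is a minimal illustration, not the paper's DEA-based ranking: each triangular fuzzy cost is replaced by its point of maximum membership (the mode), and the resulting crisp assignment problem is solved by exhaustive search. All cost figures are hypothetical.

```python
from itertools import permutations

# Triangular fuzzy costs (low, mode, high) for assigning 3 jobs to 3
# individuals. Hypothetical numbers, for illustration only.
fuzzy_costs = [
    [(8, 10, 13), (4, 5, 7),  (6, 8, 9)],
    [(3, 4, 6),   (7, 9, 12), (5, 6, 8)],
    [(9, 11, 14), (6, 7, 9),  (2, 3, 5)],
]

def defuzzify(tri):
    """Crisp representative of a triangular fuzzy number: its mode
    (the point with membership value 1). The paper instead derives a
    crisp value via DEA efficiency scores; the mode is a stand-in."""
    low, mode, high = tri
    return mode

crisp = [[defuzzify(c) for c in row] for row in fuzzy_costs]

# Brute-force search over all assignments (fine for tiny n; the
# Hungarian algorithm would scale better).
best = min(permutations(range(3)),
           key=lambda p: sum(crisp[i][p[i]] for i in range(3)))
cost = sum(crisp[i][best[i]] for i in range(3))
print(best, cost)
```

The exhaustive search here is only practical for very small n; the point of the sketch is the two-stage structure (rank/defuzzify, then solve a crisp assignment) that the abstract describes.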


An international round robin study of the stability of fast pyrolysis bio-oil was undertaken, with fifteen laboratories in five countries contributing. Two bio-oil samples were distributed to the laboratories for stability testing and further analysis. The stability test was defined in a method provided with the bio-oil samples, with viscosity measurement as a key input: the change in viscosity of a sealed sample of bio-oil held for 24 h at 80 °C was the defining element of stability. Subsequent analyses included ultimate analysis, density, moisture, ash, filterable solids, TAN/pH determination, and gel permeation chromatography. The results showed that kinematic viscosity measurement was more widely conducted and more reproducibly performed than dynamic viscosity measurement. The variation in the results of the stability test was large, and a number of reasons for the variation were identified. The subsequent analyses proved reproducible at the level found in earlier round robins on bio-oil analysis. Clearly, the analyses were more straightforward and reproducible with a bio-oil sample low in filterable solids (0.2%) than with one at a higher (2%) solids loading. These results can be helpful in setting standards for the use of bio-oil, which is just coming into the marketplace. © 2012 American Chemical Society.
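The stability criterion reduces to a one-line calculation. A minimal sketch with hypothetical viscosity readings (the study defines stability via the viscosity change of a sealed sample held 24 h at 80 °C):

```python
def stability_index(visc_initial_cst, visc_aged_cst):
    """Percent increase in kinematic viscosity after the accelerated
    aging step (24 h at 80 °C). The readings used below are
    illustrative, not data from the round robin."""
    return 100.0 * (visc_aged_cst - visc_initial_cst) / visc_initial_cst

# Hypothetical kinematic viscosities in cSt:
print(stability_index(40.0, 55.0))  # 37.5 % increase
```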


In this letter, a novel phase noise estimation scheme is proposed for coherent optical orthogonal frequency division multiplexing systems: the quasi-pilot-aided method. In this method, the phases of transmitted pilot subcarriers are deliberately correlated with the phases of the data subcarriers. Accounting for this correlation in the receiver allows the number of pilots required for sufficient estimation and compensation of phase noise to be reduced by a factor of 2 compared with the traditional pilot-aided phase noise estimation method. We carried out numerical simulation of a 40 Gb/s single-polarization transmission system, and the results indicate that with quasi-pilot-aided phase estimation only four pilot subcarriers are needed for effective phase noise compensation. © 2014 IEEE.
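The traditional pilot-aided baseline the letter improves on can be sketched as follows. This is a generic common-phase-error estimator (correlate received pilots with the known pilot symbols, take the angle of the sum, de-rotate all subcarriers), not the quasi-pilot-aided scheme itself; the pilot values are illustrative.

```python
import cmath

# One OFDM symbol: known pilot symbols and their received counterparts,
# all rotated by a common phase error of 0.3 rad (illustrative values).
phase_noise = 0.3
pilots_tx = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]   # known pilot constellation
pilots_rx = [p * cmath.exp(1j * phase_noise) for p in pilots_tx]

# Pilot-aided estimate: correlate received pilots with the known ones
# and take the angle of the sum (maximum-likelihood CPE estimate).
cpe = cmath.phase(sum(r * p.conjugate() for r, p in zip(pilots_rx, pilots_tx)))

# De-rotate a data subcarrier with the estimate.
data_rx = [(0.7 + 0.7j) * cmath.exp(1j * phase_noise)]
data_comp = [d * cmath.exp(-1j * cpe) for d in data_rx]
print(round(cpe, 3))  # 0.3
```

The quasi-pilot-aided idea in the abstract additionally ties the pilot phases to the data phases, which is what lets it halve the pilot count relative to this baseline.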


2002 Mathematics Subject Classification: 62F35, 62F15.


Nanoparticles offer an ideal platform for the delivery of small molecule drugs, subunit vaccines and genetic constructs. Besides the necessity of a homogeneous size distribution, defined loading efficiencies and reasonable production and development costs, one of the major bottlenecks in translating nanoparticles into clinical application is the need for rapid, robust and reproducible development techniques. Within this thesis, microfluidic methods were investigated for the manufacturing, drug or protein loading and purification of pharmaceutically relevant nanoparticles. Initially, methods to prepare small liposomes were evaluated and compared with a microfluidics-directed nanoprecipitation method. To support the implementation of statistical process control, design of experiments models aided process robustness and validation for the methods investigated, gave an initial overview of the size ranges obtainable with each method, and allowed the advantages and disadvantages of each method to be evaluated. The lab-on-a-chip system delivered high-throughput vesicle manufacturing, enabling a rapid process and a high degree of process control. To investigate this method further, cationic low-transition-temperature lipids, cationic bola-amphiphiles with delocalized charge centers, neutral lipids and polymers were used in the microfluidics-directed nanoprecipitation method to formulate vesicles. Whereas both the total flow rate (TFR) and the ratio of solvent to aqueous stream (flow rate ratio, FRR) were shown to influence vesicle size for high-transition-temperature lipids, the FRR was found to be the most influential factor controlling the size of vesicles consisting of low-transition-temperature lipids and of polymer-based nanoparticles. The biological activity of the resulting constructs was confirmed by in vitro transfection of pDNA constructs using cationic nanoprecipitated vesicles.
Design of experiments and multivariate data analysis revealed the mathematical relationship and significance of the factors TFR and FRR in the microfluidics process with respect to liposome size, polydispersity and transfection efficiency. Multivariate tools were used to cluster and predict specific in vivo immune responses, dependent on key liposome adjuvant characteristics, upon delivery of a tuberculosis antigen in a vaccine candidate. The addition of a low-solubility model drug (propofol) in the nanoprecipitation method resulted in significantly higher solubilisation of the drug within the liposomal bilayer compared with the control method. The microfluidics method underwent scale-up work by increasing the channel diameter and parallelising the mixers in a planar fashion, resulting in an overall 40-fold increase in throughput. Furthermore, microfluidic tools were developed based on microfluidics-directed tangential flow filtration, which allowed for continuous manufacturing, purification and concentration of liposomal drug products.
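The influence of TFR and FRR on vesicle size can be illustrated with a classical factorial main-effect calculation. The 2² design and the response values below are hypothetical, not the thesis data; the sketch only shows the kind of comparison a design-of-experiments analysis makes.

```python
# Coded 2^2 full-factorial design: TFR and FRR at low (-1) / high (+1)
# levels, with hypothetical vesicle sizes (nm) as the response.
runs = [(-1, -1, 180.0), (+1, -1, 150.0), (-1, +1, 120.0), (+1, +1, 95.0)]

def main_effect(runs, factor):
    """Classical DoE main effect: mean response at the high level
    minus mean response at the low level of one factor."""
    hi = [y for *x, y in runs if x[factor] == +1]
    lo = [y for *x, y in runs if x[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_tfr = main_effect(runs, 0)   # (150+95)/2 - (180+120)/2 = -27.5
effect_frr = main_effect(runs, 1)   # (120+95)/2 - (180+150)/2 = -57.5
print(effect_tfr, effect_frr)
```

With these invented numbers the FRR effect is the larger of the two, mirroring the thesis's finding that FRR dominates size control for low-transition-temperature lipids and polymer nanoparticles.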


Query processing is a commonly performed procedure and a vital and integral part of information processing. It is therefore important for information processing applications to continuously improve the accessibility of data sources as well as the ability to perform queries on them. The relational database model and the Structured Query Language (SQL) are currently the most popular tools with which to implement and query databases; however, a certain level of expertise is needed to use SQL and to access relational databases. This study presents a semantic modeling approach that enables the average user to access and query existing relational databases without concern for the database's structure or technicalities. The method includes an algorithm to represent relational database schemas in a semantically richer way, the result of which is a semantic view of the relational database. The user performs queries using an adapted version of SQL, namely Semantic SQL. This method substantially reduces the size and complexity of queries; it also shortens the database application development cycle and improves maintenance and reliability by reducing the size of application programs. Furthermore, a Semantic Wrapper tool illustrating the semantic wrapping method is presented. I further extend the semantic wrapping method to heterogeneous database management, where relational databases, object-oriented databases and Internet data sources make up the heterogeneous environment. Semantic schemas produced by the algorithm were employed to describe the structure of these data sources in a uniform way, and Semantic SQL was used to query the various data sources. As a result, the method provides users with the ability to access and query heterogeneous database systems in a more natural way.


This inquiry seeks to think the body through the hedonistic philosophy of Michel Onfray. In his writings, the philosopher levels strong criticism at asceticism (as constituted by the philosophical tradition and by the monotheistic religions), accusing it of despising the body and pleasure in its teachings, anchored in Christian morality. His philosophy instead defends hedonism and emphasizes pleasure as an ethical and moral principle, one that aims at the other as much as at the individual himself, elevating the body and its potentialities through the five senses. This philosophy allowed us to reflect on Physical Education, an area which has traditionally been tied to the execution of disciplinary tasks upon the body, disregarding the sensibility of its pedagogical practice. In this scenario, there is an ideal of the body that assails us daily, intensified by this area, which gives rise to the ethical problem of the body. From there, we pose our questions: in Michel Onfray's philosophy, how does the body take shape between asceticism and hedonism? What are the possible implications for Physical Education? Grounded in the method of hedonistic materialism proposed by Michel Onfray, we organize this inquiry around two central points that contemplate our categories of study, namely the glorious body and the libertine body. We draw on Michel Onfray's books, as well as interviews given by the author in magazines and newspapers, to support the aims of the inquiry. For the ethics/aesthetics approach in Physical Education, we use the texts of Silvino Santin and Hugo Lovisolo. In addition, we bring cinema into the dialogue. We regard this inquiry as a true odyssey that transported us to unknown places and returned us to others already visited.
This journey provided teachings that will help our wisdom about how to live, alerting us that care for the body should be the cultivation of ourselves, not the pursuit of physical standards stipulated by the prevailing society.


In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. Firstly, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement; a good consistency between the two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the tsunami-free simulation results from those with a tsunami present, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed.
Secondly, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominant sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients. Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived given knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which should be zero) is treated as an error map introduced during the overall retrieval stage and is used to keep such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge, through a fitting process, whether the SSHAs are truly tsunami-induced, which makes it possible to decrease the false alarm rate. After this step, tsunami parameter estimation proceeds based upon the fitted results from the tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity, believed to be more suitable in real-world scenarios. The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
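The Cox and Munk model invoked above expresses the clean-surface total mean square slope as a linear function of 10 m wind speed. The sketch below uses the published 1954 coefficients; the tsunami-induced wind perturbation value is illustrative, not from this thesis.

```python
def cox_munk_mss(wind_speed_ms):
    """Cox and Munk (1954) clean-surface total mean square slope as a
    linear function of 10 m wind speed in m/s."""
    return 0.003 + 5.12e-3 * wind_speed_ms

# A tsunami-induced wind perturbation shows up as a change in mss and
# hence in the scattering coefficient; the 1.5 m/s delta is illustrative.
background, perturbed = 10.0, 10.0 - 1.5
delta_mss = cox_munk_mss(background) - cox_munk_mss(perturbed)
print(delta_mss)
```

It is this wind-speed dependence of the surface slope statistics that lets a tsunami-induced wind perturbation leave a detectable signature in the simulated DDMs.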


The present research is exploratory, bibliographic and qualitative in character. It is grounded in consolidated scientific arguments from cognitive theories inspired by the constructivist method and, from this perspective, proposes to develop a didactic guide oriented to students of MOOCs (Massive Open Online Courses) that will make it possible to maximize the use and assimilation of the knowledge available in these courses. It also intends to prepare these students in the practice of a retention methodology that ensures the knowledge acquired is neither lost nor forgotten over time. The theoretical framework, based on the theories of Meaningful Learning (Ausubel), Genetic Epistemology (Piaget), Socioconstructivism (Vygotsky) and Multimedia Learning (Mayer), supports the understanding of important concepts such as meaningful learning, prior knowledge and concept maps. It is further supported by the fundamental contribution of the Theory of Categories, whose concepts inter-relate with a teaching methodology based on structured knowledge maps in establishing the teaching-learning binomial, and by the valuable study performed by professors Luciano Lima (UFU) and Rubens Barbosa Filho (UEMS) that culminated in the development of the Exponential Effective Memorization Method in Binary Base (Double MEB).


Electron beam-induced deposition (EBID) is a direct-write process in which an electron beam locally decomposes a precursor gas, leaving behind non-volatile deposits. It is a fast and relatively inexpensive method for fabricating conductive (metal) or insulating (oxide) nanostructures. Unfortunately, the EBID process typically yields metal nanostructures with relatively high resistivity because the gas precursors employed are hydrocarbon based. We have developed deposition protocols using a novel gas-injector system (GIS) with a carbon-free Pt precursor. Interconnect-type structures were deposited on preformed metal architectures. The obtained structures were analysed by cross-sectional TEM, and their electrical properties were analysed ex situ using four-point-probe electrical tests. The results suggest that both the structural and electrical characteristics differ significantly from those of Pt interconnects deposited from conventional hydrocarbon-based precursors, and show great promise for the development of low-resistivity electrical contacts.


BACKGROUND: Moderate-to-vigorous physical activity (MVPA) is an important determinant of children's physical health and is commonly measured using accelerometers. A major limitation of accelerometers is non-wear time, the time during which the participant did not wear the device. Given that non-wear time is traditionally discarded from the dataset prior to estimating MVPA, final estimates of MVPA may be biased, so alternate approaches should be explored. OBJECTIVES: The objectives of this thesis were to 1) develop and describe an imputation approach that uses socio-demographic, time, health, and behavioural data from participants to replace non-wear time accelerometer data, 2) determine the extent to which imputation of non-wear time data influences estimates of MVPA, and 3) determine whether imputation of non-wear time data influences the associations between MVPA, body mass index (BMI), and systolic blood pressure (SBP). METHODS: Seven days of accelerometer data were collected from 332 children aged 10-13 using Actical accelerometers. Three methods for handling missing accelerometer data were compared: 1) the "non-imputed" method, wherein non-wear time was deleted from the dataset; 2) imputed dataset I, wherein the imputation of MVPA during non-wear time was based upon socio-demographic factors of the participant (e.g., age), health information (e.g., BMI), and time characteristics of the non-wear period (e.g., season); and 3) imputed dataset II, wherein the imputation of MVPA was based upon the same variables as imputed dataset I plus organized sport information. Associations between MVPA and health outcomes under each method were assessed using linear regression. RESULTS: Non-wear time accounted for 7.5% of epochs during waking hours. The average minutes/day of MVPA was 56.8 (95% CI: 54.2, 59.5) in the non-imputed dataset, 58.4 (95% CI: 55.8, 61.0) in imputed dataset I, and 59.0 (95% CI: 56.3, 61.5) in imputed dataset II.
Estimates from the three datasets were not significantly different, and the strengths of the relationships of MVPA with BMI and SBP were comparable across all three datasets. CONCLUSION: These findings suggest that studies achieving high accelerometer compliance with unsystematic patterns of missing data can use the traditional approach of deleting non-wear time from the dataset to obtain MVPA estimates without substantial bias.
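The difference between deleting and imputing non-wear time can be sketched in a few lines. The epoch data here are hypothetical, and the single constant imputation value stands in for the thesis's richer model based on socio-demographic, time, health and sport variables.

```python
# Minute-level MVPA indicators for one child-day; None marks non-wear
# epochs (hypothetical data, not the thesis dataset).
epochs = [1, 0, 0, 1, None, None, 1, 0, 1, None, 0, 1]

# Approach 1 (traditional): delete non-wear epochs before summing MVPA.
worn = [e for e in epochs if e is not None]
mvpa_deleted = sum(worn)

# Approach 2: impute each non-wear epoch with a predicted MVPA
# probability; here the wear-time mean stands in for a model-based
# prediction.
p_mvpa = sum(worn) / len(worn)
mvpa_imputed = sum(e if e is not None else p_mvpa for e in epochs)

print(mvpa_deleted, round(mvpa_imputed, 2))
```

With little non-wear time and no systematic pattern to it, the two totals stay close, which is the intuition behind the thesis's conclusion that deletion introduced little bias at high compliance.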


Due to growing concerns associated with fossil fuels, emphasis has been placed on clean and sustainable energy generation. This has resulted in an increase in photovoltaic (PV) units being integrated into the utility system. The integration of PV units has raised some concerns for utility power systems, including the consequences of failing to detect islanding. Numerous methods for islanding detection have been introduced in the literature. They can be categorized into local methods and remote methods, with local methods further divided into passive and active methods. Active methods generally have a smaller Non-Detection Zone (NDZ), but the injected disturbances slightly degrade the power quality and reliability of the power system. The Slip Mode Frequency Shift Islanding Detection Method (SMS IDM) is an active method that uses positive feedback for islanding detection. In this method, the phase angle of the converter is controlled to be a sinusoidal function of the deviation of the Point of Common Coupling (PCC) voltage frequency from the nominal grid frequency. This method has a non-detection zone, meaning it fails to detect islanding under specific local load conditions. If the SMS IDM employed a function other than the sinusoidal one for drifting the phase angle of the inverter, its non-detection zone could be smaller. In addition, the Advanced Slip Mode Frequency Shift Islanding Detection Method (Advanced SMS IDM), introduced in this thesis, eliminates the non-detection zone of the SMS IDM by changing the parameters of the SMS IDM based on the local load impedance value. Moreover, the stability of the system is investigated by developing the dynamical equations of the system for two operation modes: grid-connected and islanded.
It is mathematically proven that for some loading conditions the nominal frequency is an unstable point and the operating frequency slides to another stable point, while for other loading conditions the nominal frequency is the only stable point of the system when islanding occurs. Simulation and experimental results show the accuracy of the proposed methods in detecting islanding and verify the validity of the mathematical analysis.
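The SMS phase-drift law described above is commonly written as θ = θm · sin((π/2)(f − f0)/(fm − f0)), where f0 is the nominal frequency and θm the maximum phase shift reached at frequency fm. A minimal sketch with typical textbook parameter values (not the thesis's):

```python
import math

def sms_phase(f_pcc, f_nom=60.0, f_m=63.0, theta_m=math.radians(10)):
    """Slip-mode frequency shift: inverter current phase angle as a
    sinusoidal function of the PCC frequency deviation from nominal.
    Parameter values are typical illustrative choices."""
    return theta_m * math.sin(math.pi / 2 * (f_pcc - f_nom) / (f_m - f_nom))

# At nominal frequency the added phase is zero (the grid-tied operating
# point); any frequency drift produces a phase shift of the same sign,
# giving the positive feedback that pushes an islanded system away
# from nominal until a frequency relay trips.
print(sms_phase(60.0), sms_phase(61.5) > 0, sms_phase(58.5) < 0)
```

The Advanced SMS IDM in the abstract keeps this structure but adapts the parameters (here f_m and theta_m) to the measured local load impedance, which is what removes the non-detection zone.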


Angiotensin-converting enzyme (EC 3.4.15.1; ACE) is a membrane-bound dipeptidyl carboxypeptidase that mediates cleavage of the C-terminal dipeptide His-Leu of the decapeptide angiotensin I, generating the most powerful endogenous vasoconstrictor, angiotensin II.
Some ACE inhibitors, such as Captopril, have been used as anti-hypertensive drugs. Moreover, in recent years large numbers of ACE inhibitors have been identified and isolated from peptides derived from food materials such as casein, soy protein, fish protein and so on. Functional foods with hypotensive effects have been developed on the basis of this work.
Typical procedures for screening hypotensive peptides of food origin involve separation of the products of peptic and tryptic digestion of proteins, followed by determination of the inhibitory activity of each fraction. The method developed by Cushman has been the most widely used: ACE activity is determined from the amount of hippuric acid generated by the enzymatic reaction of ACE with the tripeptide hippuryl-L-histidyl-L-leucine. The hippuric acid is determined spectrophotometrically at 228 nm after its isolation from the reaction system by ethyl acetate extraction, which not only requires a large quantity of reagent but also introduces large errors.
An improved method based on Cushman's method is proposed in this paper. The enzymatic reaction system follows Cushman's method, while isolation and determination of hippuric acid are performed by medium-performance gel chromatography on a Toyopearl HW-40S column. Due to the size-exclusion nature of the column, combined with its somewhat hydrophobic properties, complete separation of the four fractions present in the reaction system is obtained within a small fraction of the time required by Cushman's method, with good reproducibility.
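The hippuric-acid readout feeds a standard percent-inhibition calculation. A minimal sketch with illustrative absorbance readings at 228 nm; the formula is the usual Cushman-type one and the numbers are not taken from this paper.

```python
def ace_inhibition(a_control, a_sample, a_blank):
    """Percent ACE inhibition from hippuric-acid absorbance at 228 nm:
    control = enzyme reaction without inhibitor, blank = reaction
    stopped at time zero. Readings below are illustrative only."""
    return 100.0 * (a_control - a_sample) / (a_control - a_blank)

print(ace_inhibition(a_control=0.80, a_sample=0.35, a_blank=0.05))  # 60.0
```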


Introduction: In Portugal, there are few validated instruments for assessing the important construct of resilience in the adolescent population. The main objective of this study was therefore the preliminary adaptation and validation of the Escala de Avaliação do EU Resiliente (EAER) for Portuguese adolescents. As a second objective, we also explored the associations, in the same sample, between resilience, self-harm and suicidal ideation in adolescence. Method: The sample consisted of 226 adolescents (male, n = 139, 61.5%), between 12 and 18 years of age, who completed a protocol consisting of a sociodemographic questionnaire, the Escala de Avaliação do EU Resiliente (EAER), the Impulse, Self-harm and Suicide Ideation Questionnaire for Adolescents (ISSIQ-A) and a self-concept scale. Results: The results showed that the EAER has good internal consistency (α = 0.857) and good temporal stability (r = 0.720). A principal component analysis showed that the EAER has three factors: external support, internal personal strengths, and coping strategies. There were negative correlations between resilience and both self-harm and suicidal ideation, and positive correlations between resilience and self-concept, confirming the divergent and convergent validity of the EAER. The adolescents in the sample showed high levels of resilience (M = 58.69, SD = 6.67). In the total sample, 61.5% (n = 139) reported suicidal ideation and 26.5% (n = 60) reported self-harm behaviors. Conclusion: Overall, the EAER has good psychometric properties and can therefore be considered a valid and useful scale that can be safely used to assess resilience in Portuguese adolescents. With this study we have extended the range of valid instruments for measuring resilience in adolescents and contributed to the advance of research on adolescence in Portugal.


The aim of this work was to obtain concentrates of mono- and polyunsaturated fatty acids from bleached carp (Cyprinus carpio) oil using the urea complexation method, and to establish the best conditions through a study of its parameters. A 2³ experimental design was used to determine the factors that significantly influence (at the 95% level) the urea complexation experiments and to identify the ranges of these factors that give the best results. The factors studied were: urea to fatty acid ratio (2:1 and 6:1), crystallization temperature (4 °C and -12 °C) and crystallization time (14 and 24 h). The responses for statistical analysis were: yield of the liquid fraction (%Yield), percentage of free fatty acids (%FFA) in the liquid fraction, and the fatty acid profile. The urea/FA ratio proved very effective, in a directly proportional way, for obtaining concentrates of monounsaturated (MUFA) and polyunsaturated (PUFA) fatty acids, since increasing the urea/FA ratio created conditions favorable to the inclusion of saturated fatty acids (SFA) in the urea crystals. For this reason the 6:1 ratio was better for obtaining MUFA+PUFA concentrates. The relationship between temperature and the yield of unsaturated fatty acid concentrates was inversely proportional, with the lowest temperature (-12 °C) giving the best separation of saturated from unsaturated fatty acids. Time was significant, but less influential than the other variables studied; given the operational cost involved in the urea complexation method, the gain in yield does not justify the longer time, and 14 h offers yields that tend toward higher productivity.
Thus, the best conditions for obtaining the concentrates were: the higher urea/FA ratio (6:1), the lower temperature (-12 °C) and the shorter time (14 h). Under these conditions the liquid fractions showed a mass yield of up to 65.4%, with an average free fatty acid content (%FFA) of 35.8 g/100 g oleic acid. Under the best urea complexation conditions, the mono- and polyunsaturated fatty acids were concentrated to 85.2%, and among these EPA+DHA were concentrated to 9.4%.
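The 2³ factorial design described above enumerates eight experimental runs. A minimal sketch of the factor grid, using the levels named in the abstract:

```python
from itertools import product

# The study's 2^3 factorial: urea/fatty-acid ratio, crystallization
# temperature and crystallization time, each at two levels.
levels = {
    "urea_fa_ratio": ("2:1", "6:1"),
    "temperature_C": (4, -12),
    "time_h": (14, 24),
}

# One run per combination of levels: 2 * 2 * 2 = 8 runs in total.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs))  # 8

# The condition the study found best is one of the eight runs.
best = {"urea_fa_ratio": "6:1", "temperature_C": -12, "time_h": 14}
assert best in runs
```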