934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation


Abstract:

The work carried out in this doctoral thesis aimed to develop new antifungal formulations in the form of polymeric nanoparticles (NPs), in order to improve the efficacy and specificity of antifungal treatments against susceptible or resistant strains of Candida spp. and Aspergillus spp. and against biofilm-forming strains of Candida albicans. In the first part of this work, we synthesized and characterized a polyester-co-polyether polymer grafted with poly(ethylene glycol) (PEG-g-PLA). Besides being original and innovative, this copolymer has the advantage of being non-toxic and of exhibiting sustained-release characteristics. Three antifungals commonly used in the clinic and presenting suboptimal bioavailability were chosen: two azoles, voriconazole (VRZ) and itraconazole (ITZ), and one polyene, amphotericin B (AMB). In addition to administration problems, these active pharmaceutical ingredients (APIs) also present significant toxicity issues. Polymeric NPs encapsulating these APIs were prepared by an oil-in-water (O/W) emulsion technique followed by solvent evaporation. Once prepared, the NPs were characterized, and particles of approximately 200 nm in diameter were obtained. The NPs were designed to have a core/shell structure, with a hydrophobic polymer core (PLA) and a hydrophilic PEG corona. A low loading efficiency (1.3% w/w) was obtained for the formulation of VRZ encapsulated in NPs (NP/VRZ). However, the formulation of AMB encapsulated in NPs (NP/AMB) showed satisfactory loading levels (25.3% w/w). Indeed, the hydrophobic character of PLA ensured good affinity with the hydrophobic APIs, particularly AMB, the most hydrophobic of the selected agents. Controlled-release studies showed drug release over several days. The NP/AMB formulation was tested on a cascade impactor, an in vitro lung model, which demonstrated the potential of this formulation to be administered efficiently by the pulmonary route. Indeed, the cascade impactor results showed that most of the formulation was recovered at the collection stage corresponding to the bronchial level, where pulmonary fungal infections are mostly located. In the second part of this work, we tested the new antifungal formulations on planktonic strains of Candida spp. and Aspergillus spp. and on biofilm-forming strains of Candida albicans, following the standardized procedures of the National Committee for Clinical Laboratory Standards (NCCLS). The selected strains had demonstrated resistance to azoles and polyenes. The in vitro efficacy studies proved beyond doubt that the new formulations offer markedly improved efficacy compared with the free antifungal agent. To determine whether the improved antifungal efficacy was due to internalization of the NPs, we evaluated the behavior of the NPs with fungal cells. We performed qualitative fluorescence microscopy studies on NPs labeled with rhodamine (Rh). As expected, the NPs showed intracellular localization. To rule out mere adhesion of the NPs to the yeast surface, we also confirmed their internalization by confocal fluorescence microscopy.
It is important to note that few studies to date have focused on developing new antifungal formulations based on non-toxic polymers for the treatment of mycoses, which gives the work in this thesis great value and originality. The convincing results obtained open the way to a new approach for circumventing fungal resistance, an increasingly important problem in infectious disease medicine.

Abstract:

Efficiently simulating global illumination is one of the most important open problems in computer graphics. Accurately computing the effects of indirect illumination, caused by secondary bounces of light off surfaces in a 3D scene, is generally an expensive process, often addressed with algorithms such as path tracing or photon mapping. These techniques numerically solve the rendering equation using Monte Carlo ray tracing. Ward et al. proposed a technique called irradiance caching to accelerate the preceding techniques when computing the indirect component of global illumination on diffuse surfaces. Krivanek extended the approach of Ward and Heckbert to the more complex case of specular surfaces, introducing an approach called radiance caching. Jarosz et al. and Schwarzhaupt et al. proposed a model using the Hessian and visibility information to refine the placement of cache points in the scene, significantly improving the quality and performance of the earlier approaches. In this thesis, we extend the approaches introduced in these earlier works to the radiance caching problem in order to improve the placement of cache elements. We also uncovered an important problem that previous work had overlooked because of the choice of test scenes. We carried out a preliminary study of this problem and identified two potential solutions that merit further investigation.
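For context, the irradiance caching interpolation referred to above is commonly written in the following standard form (Ward et al.; given here for reference, not quoted from the thesis):

\[
w_i(\mathbf{x},\mathbf{n}) = \left(\frac{\lVert\mathbf{x}-\mathbf{x}_i\rVert}{R_i} + \sqrt{1-\mathbf{n}\cdot\mathbf{n}_i}\,\right)^{-1},
\qquad
E(\mathbf{x}) \approx \frac{\sum_{i:\,w_i>1/a} w_i\,E_i}{\sum_{i:\,w_i>1/a} w_i},
\]

where \(\mathbf{x}_i\), \(\mathbf{n}_i\), \(E_i\) and \(R_i\) are the position, normal, irradiance and harmonic mean distance to visible surfaces of cached record \(i\), and \(a\) is a user-set accuracy parameter. Record placement amounts to choosing where this weight falls below the threshold, which is what the Hessian- and visibility-based criteria mentioned above refine.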

Abstract:

The berth allocation problem (BAP) is one of the main decision problems at port terminals and has been studied extensively. In previous research, the BAP was reformulated as a generalized set partitioning problem (GSPP) and solved using a standard solver. The assignments (columns) were generated statically a priori and supplied as input to the optimization model. This method can provide an optimal solution to the problem for medium-sized instances. However, its main drawback is the explosion in the number of assignments as the problem size grows, which causes the optimization solver to run out of memory. In this thesis, we address the limits of the GSPP reformulation. We present a column generation framework in which assignments are generated dynamically in order to solve large instances of the BAP. We propose a column generation algorithm that can easily be adapted to solve all variants of the BAP based on different spatial and temporal attributes. We tested our method on an allocation model in which berths are discrete, ship arrivals are dynamic, and handling times depend on the berth at which a ship is moored. Experimental results on a set of artificial instances indicate that the proposed method provides an optimal or near-optimal solution, even for very large problems, in only a few minutes.
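To make the framework concrete, here is a minimal sketch of the column generation loop described above, assuming a set-partitioning restricted master problem in which each column is a 0/1 incidence vector over ships; the pricing routine price_out stands in for the problem-specific subproblem (it is a hypothetical placeholder, as are all names here), and scipy's HiGHS interface solves the LP relaxation:

    import numpy as np
    from scipy.optimize import linprog

    def solve_rmp(A, c):
        # Restricted master problem: min c.x  s.t.  A x = 1 (one assignment
        # per ship), x >= 0.  Returns the LP solution and the dual prices.
        res = linprog(c, A_eq=A, b_eq=np.ones(A.shape[0]),
                      bounds=[(0, None)] * A.shape[1], method="highs")
        return res, res.eqlin.marginals

    def column_generation(A, c, price_out, tol=1e-9):
        # Grow the column pool until no assignment prices out negatively.
        while True:
            res, duals = solve_rmp(A, c)
            column, cost = price_out(duals)   # problem-specific pricing
            if column is None or cost - duals @ column >= -tol:
                return res                    # LP relaxation is optimal
            A = np.hstack([A, column.reshape(-1, 1)])
            c = np.append(c, cost)

Branching or rounding on the resulting fractional solution would then yield integer berth plans; the point of the sketch is only the interaction between the master duals and the pricing step.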

Abstract:

We present an imaginary-time path-integral study of the quantum decay of a metastable state of a uniaxial magnetic particle placed in a magnetic field at an arbitrary angle. Our findings agree with earlier results of Zaslavskii obtained by mapping the spin Hamiltonian onto a particle Hamiltonian. In the low-barrier limit, we find a weak dependence of the decay rate on the angle, except when the field is almost normal to the anisotropy axis, where the rate is sharply peaked, and when the field approaches the parallel orientation, where the rate rapidly goes to zero. This distinct angular dependence, together with the dependence of the rate on the field strength, provides an independent test for macroscopic spin tunneling.
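As background (the standard instanton form, not quoted from the paper): in an imaginary-time path-integral treatment the decay rate of the metastable state is

\[
\Gamma \simeq A\, e^{-S_E/\hbar},
\]

where \(S_E\) is the Euclidean action of the bounce trajectory connecting the metastable minimum to the escape point and the prefactor \(A\) comes from Gaussian fluctuations around it; the angular and field-strength dependences described above enter through \(S_E\).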

Abstract:

We study the analytical solution of the Monte Carlo dynamics in the spherical Sherrington-Kirkpatrick model using the technique of the generating function. Explicit solutions for one-time observables (like the energy) and two-time observables (like the correlation and response function) are obtained. We show that the crucial quantity which governs the dynamics is the acceptance rate. At zero temperature, an adiabatic approximation reveals that the relaxational behavior of the model corresponds to that of a single harmonic oscillator with an effective renormalized mass.
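For reference, the spherical Sherrington-Kirkpatrick model is conventionally defined by (standard form, not quoted from the paper)

\[
H = -\tfrac{1}{2}\sum_{i\neq j} J_{ij}\, s_i s_j, \qquad \sum_{i=1}^{N} s_i^{2} = N,
\]

with couplings \(J_{ij}\) drawn independently from a Gaussian of zero mean and variance \(J^2/N\). The Monte Carlo dynamics proposes moves on this constrained sphere, and the fraction of proposals accepted by the Metropolis rule is the acceptance rate that, as stated above, governs the relaxation.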

Abstract:

This paper presents Reinforcement Learning (RL) approaches to the Economic Dispatch problem. Economic Dispatch is first formulated as a multi-stage decision-making problem, and two variants of RL algorithms are then presented. A third algorithm, which takes transmission losses into consideration, is also explained. The efficiency and flexibility of the proposed algorithms are demonstrated on different representative systems: a three-generator system with a given generation cost table, the IEEE 30-bus system with quadratic cost functions, a 10-generator system with piecewise quadratic cost functions, and a 20-generator system considering transmission losses. A comparison of the computation times of the different algorithms is also carried out.
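As an illustration of the multi-stage formulation (a minimal sketch under invented assumptions: the unit ordering, discretization, cost coefficients and penalty are illustrative, not the paper's algorithm or data), dispatch can be cast as allocating a residual demand unit by unit and learned with tabular Q-learning:

    import numpy as np

    # Toy system: 3 units with quadratic costs c_i(p) = a_i p^2 + b_i p.
    A = np.array([0.008, 0.012, 0.010])   # illustrative cost coefficients
    B = np.array([8.0, 6.0, 7.0])
    P_MAX, DEMAND, STEP = 60, 120, 10     # MW; the state is the residual demand
    levels = np.arange(0, P_MAX + 1, STEP)          # discrete actions (MW)
    Q = np.zeros((3, DEMAND // STEP + 1, len(levels)))

    def cost(i, p):
        return A[i] * p**2 + B[i] * p

    rng = np.random.default_rng(0)
    for episode in range(20000):
        residual = DEMAND
        for i in range(3):                          # one decision stage per unit
            s = residual // STEP
            if rng.random() < 0.1:                  # epsilon-greedy exploration
                a = rng.integers(len(levels))
            else:
                a = int(np.argmin(Q[i, s]))
            p = min(levels[a], residual)
            # Stage cost; heavily penalize demand left unmet after the last unit.
            r = cost(i, p) + (1e3 * (residual - p) if i == 2 else 0.0)
            nxt = (residual - p) // STEP
            target = r + (0.0 if i == 2 else Q[i + 1, nxt].min())
            Q[i, s, a] += 0.1 * (target - Q[i, s, a])
            residual -= p

    # Greedy dispatch after training:
    residual, plan = DEMAND, []
    for i in range(3):
        p = min(levels[int(np.argmin(Q[i, residual // STEP]))], residual)
        plan.append(p); residual -= p
    print(plan, "unmet:", residual)

Transmission losses, as in the paper's third algorithm, would enter by inflating the demand that the units must jointly cover.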

Abstract:

Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs (siRNAs). This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, and has been applied in mice to AIDS, neurodegenerative diseases, cholesterol regulation and cancer, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of the many issues related to siRNA. When designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So, before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods have already been developed that consider both the inhibition efficiency and the off-target possibility of an siRNA against a gene. Of these methods, only a few have achieved good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA, in terms of inhibition capacity and off-target possibility, against target mRNAs with improved efficacy, which may be useful in the area of gene silencing and drug design for tumor development. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in building an improved solution. The approaches proposed in this study therefore extend some of the well-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches and thermodynamic features, such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. Here, we propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency to degrade target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences. The validations are conducted using the Pearson Correlation Coefficient, the Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity and specificity. We find that the OpsiD approach is capable of predicting the inhibition capacity of an siRNA against a target mRNA with results improved over the state-of-the-art techniques. We are also able to understand the influence of whole stacking energy on siRNA efficiency.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. The proposed model, OpsiD, can thus predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since we have taken efforts to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects when performing gene silencing in various bioinformatics applications can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing. The approach may prove useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
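As a minimal sketch of the SVM classification step described above (illustrative only: the one-hot feature encoding, random placeholder data and hyperparameters are assumptions of this example, not the thesis's descriptors or dataset):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import matthews_corrcoef

    def one_hot(seq):
        # Position-wise one-hot encoding of a 19-nt siRNA guide sequence.
        table = {"A": 0, "C": 1, "G": 2, "U": 3}
        x = np.zeros((len(seq), 4))
        for i, nt in enumerate(seq):
            x[i, table[nt]] = 1.0
        return x.ravel()

    # Placeholder data: random sequences with binary labels (1 = efficient).
    rng = np.random.default_rng(0)
    seqs = ["".join(rng.choice(list("ACGU"), 19)) for _ in range(200)]
    y = rng.integers(0, 2, len(seqs))

    X = np.array([one_hot(s) for s in seqs])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
    print("MCC:", matthews_corrcoef(y_te, clf.predict(X_te)))

On real data the labels would come from measured knockdown efficiency, and thermodynamic descriptors such as whole stacking energy would be appended to the feature vector.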

Abstract:

Regional rail passenger transport (SPNV), as part of public transport (ÖPNV), is an integral component of basic public services. Rural regions away from urban agglomerations in particular receive economically and socioculturally important impulses from the SPNV, so the future viability of this mode of transport must be secured through suitable design measures. OBJECTIVES: The aim of this work was both to derive such design measures from basic theory and to describe their concrete form for transport industry practice. It thus targeted structural concepts as well as practical individual measures. The focus of the analysis was on Germany, although transport-related privatization experience from other European countries and the USA was also taken into account. METHODS: German-language and international literature, primarily from transport research, as well as case studies of transport privatization were reviewed. In addition, decision-makers at Deutsche Bahn (DB) and railway experts outside DB were interviewed. A group of 5 DB-internal and 5 DB-external participants also took part in a standardized survey assessing structural and specific design measures for the SPNV. RESULTS: In structural terms, the ownership and control arrangement of the entire German railway system and of the SPNV must be viewed critically, since the dominant railway infrastructure operator (EIU), DB Netz AG, and the railway transport undertakings using the network (EVUs, still mostly DB companies) are conflated within the DB holding. This creates potential for discrimination, above all against non-DB EVUs. This situation does not amount to a genuine separation of network and operations, which would make sense in terms of competition theory and would enable sustainable competition among different EVUs. The argument advanced by DB to cement existing structures, according to which network and operations should form an inseparable unit (synergy), is acceptable neither in terms of competition theory nor at the level of technical considerations. Rather, the current entanglement of the quasi-monopolistic network level with the level of EVU services restricts potential for innovation. Apart from the fundamental need for a consistent separation of network and operations and for decentralized structures, competitive tendering (de facto a public procedure) for the operation of SPNV lines should be considered as a course of action. In this way, competition can take effect at the very source of an EVU service, although political and administrative resistance to this concept is currently still unmistakable. Regarding infrastructure measures for the SPNV, the so-called "operator model" is particularly sensible: the EIU taking over a line concentrates, in line with its core competencies, on operations. Responsibility for construction work and track maintenance lies with the operator, which can purchase such services on the market (cost-saving potential). When DB Netz AG plans to divest a line, the tendering procedure mentioned above should accordingly be used to select an operator.
As individual cost-cutting measures to secure the future of the SPNV, the optimization of rolling-stock circulation, the use of railcars instead of locomotive-hauled trains, and a further intensified focus on customer-oriented improvements in the attractiveness of the SPNV are finally recommended. CONCLUSIONS: Courses of action for securing the SPNV in the long term cannot result from the realization of extreme positions (state interventionism versus liberalism), but only from a pragmatic balance between the two poles. A shift towards the market-economy pole appears sensible in order to bring about cost reductions and efficiency gains for the SPNV through the use of competitive impulses.

Abstract:

The utilization and management of the arbuscular mycorrhiza (AM) symbiosis may improve the production and sustainability of cropping systems. For this purpose, native AM fungi (AMF) were sought and tested for their efficiency in increasing plant growth through enhanced P uptake and alleviation of drought stress. Pot experiments with safflower (Carthamus tinctorius) and pea (Pisum sativum) in five soils (mostly sandy loamy Luvisols) and field experiments with peas were carried out over three years at four different sites. Host plants were grown in heated soils inoculated with AMF or with the respective heat-sterilized inoculum. In the case of peas, mutants resistant to AMF colonization were used as non-mycorrhizal controls. The mycorrhizal impact on yields and their components, transpiration, and P and N uptake was studied in several experiments, partly under varying P and N levels and water supply. Screening of native AMF by most-probable-number bioassays was not very meaningful. Soil monoliths were placed in the open to simulate field conditions. Inoculation with a native AMF mix improved grain yield and shoot and leaf growth variables compared to the control. Exposed to drought, the higher soil water depletion of mycorrhizal plants resulted in a haying-off effect. The growth response to this inoculum could not be significantly reproduced in a subsequent open-air pot experiment at two levels of irrigation and P fertilization; safflower, however, grew severalfold better at higher P and water supply. Biomass water-use efficiency was improved by the AMF inoculum in the two experiments. Transpiration rates were not significantly affected by AM but tended to be higher in non-mycorrhizal safflower. A fundamental methodological problem in mycorrhiza field research is providing an appropriate (negative) control for the experimental factor arbuscular mycorrhiza. Soil sterilization and fungicide treatment have undesirable side effects in field and greenhouse settings. Furthermore, artificial rooting, temperature and light conditions in pot experiments may interfere with the interpretation of mycorrhiza effects. Therefore, as a new integrative approach, the myc- pea mutant P2 was tested as a non-mycorrhizal control in a bioassay to evaluate AMF under field conditions, in comparison to the symbiotic isogenic wild type of var. FRISSON. However, mutant P2 is also of nod- phenotype and therefore unable to fix N2. A 3-factorial experiment was carried out in a climate chamber at high NPK fertilization to examine the two isolines under non-symbiotic and symbiotic conditions. P2 achieved the same (or higher) biomass as the wild type under both good and poor water supply. However, inoculation with the AMF Glomus manihot did not improve plant growth. Differences in grain and straw yields between these isogenic pea lines in field trials were large (up to 80 per cent), mainly due to higher P uptake under P- and water-limited conditions. The lacking N2 fixation in the mutants was compensated for by a high mineral N supply, as indicated by the high N status of the mutant pea plants. This finding was corroborated by the results of a major field experiment at three sites with two levels of N fertilization. The higher N rate did not affect grain or straw yields of the non-fixing mutants. Very efficient AMF were detected in a Ferric Luvisol on pasture land, as revealed by the yield levels of the evaluation crop and by functional vital staining of highly colonized roots.
Generally, grain yield levels were low, between 40 and 980 kg ha-1. An additional pot trial was carried out to elucidate the strong mycorrhizal effect in the Ferric Luvisol. A tripling of the plant-equivalent field P fertilization was necessary to compensate for the mycorrhizal benefit, which, at five times higher grain yield, was very similar to that found in the field experiment. However, the yield differences between the two isolines were not always plausible as the evaluation variable, because they were also found in (small) field test trials with apparently sufficient P and N supply and in a soil with almost no AMF potential. The same occurred for pea lines of var. SPARKLE and its non-fixing mycorrhizal (E135) and non-symbiotic (R25) isomutants, which were tested in order to experimentally exclude undesirable benefits from N2 fixation. In contrast to var. FRISSON, SPARKLE was not a suitable variety for Mediterranean field conditions. This raises the suspicion that putative genetic defects other than the symbiotic ones may be effective under field conditions, which would conflict with the concept of an appropriate control. It was concluded that AMF-resistant plants may help to overcome fundamental problems of present research on arbuscular mycorrhiza, but may create new ones.

Abstract:

A fully relativistic four-component Dirac-Fock-Slater program for diatomics, with numerically given AOs as basis functions, is presented. We discuss the problem of the errors due to the finite basis set and due to the influence of the negative-energy solutions of the Dirac Hamiltonian. The negative-continuum contributions are found to be very small.
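For reference (a standard form, not quoted from the paper), the one-electron Hamiltonian underlying such a four-component Dirac-Fock-Slater calculation is

\[
\hat{h}_D = c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m c^{2} + V_{\mathrm{eff}}(\mathbf{r}),
\]

where \(\boldsymbol{\alpha}\) and \(\beta\) are the \(4\times 4\) Dirac matrices and \(V_{\mathrm{eff}}\) collects the nuclear attraction, the Coulomb potential and the Slater (X\(\alpha\)) exchange term. Its spectrum includes a negative-energy continuum below \(-mc^{2}\), which is why a finite basis can mix in the negative-energy solutions discussed above.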

Abstract:

The rapid growth in high-data-rate communication systems has introduced new, highly spectrally efficient modulation techniques and standards such as LTE-A (Long Term Evolution-Advanced) for 4G (4th generation) systems. These techniques provide a broader bandwidth but introduce a high peak-to-average power ratio (PAR) problem at the high-power amplifier (HPA) level of the communication system base transceiver station (BTS). To avoid spectral spreading due to high PAR, a stringent linearity requirement is imposed, which forces the HPA to operate at large back-off power at the expense of power efficiency. Consequently, high-power devices are fundamental in HPAs for high linearity and efficiency. Recent developments in wide-bandgap power devices, in particular the AlGaN/GaN HEMT, have offered higher power levels with a superior linearity-efficiency trade-off in microwave communications. For a cost-effective HPA design-to-production cycle, rigorous computer-aided design (CAD) AlGaN/GaN HEMT models are essential to reflect the real response with increasing power level and channel temperature. Therefore, a large-signal electrothermal modeling procedure for large-size AlGaN/GaN HEMTs is proposed. The HEMT structure analysis, characterization, data processing, model extraction and model implementation phases are covered in this thesis, including trapping and self-heating dispersion accounting for nonlinear drain current collapse. The small-signal model is extracted using the 22-element modeling procedure developed in our department. The intrinsic large-signal model is investigated in depth in conjunction with linearity prediction. The accuracy of the nonlinear drain current has been enhanced through several measures, such as trapping and self-heating characterization. Also, the thermal profile of the HEMT structure has been investigated, and the corresponding thermal resistance has been extracted through thermal simulation together with chuck-controlled-temperature pulsed I(V) and static DC measurements. A higher-order equivalent thermal model is extracted and implemented in the HEMT large-signal model to accurately estimate the instantaneous channel temperature. Moreover, trapping and self-heating transients have been characterized through transient measurements. The obtained time constants are represented by equivalent sub-circuits and integrated in the nonlinear drain current implementation to account for the dynamic prediction of complex communication signals. Verification of this table-based large-size large-signal electrothermal model implementation has shown high accuracy in terms of output power, gain, efficiency and nonlinearity prediction with respect to standard large-signal test signals.
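To make the thermal part concrete (a generic form assumed here, not extracted from the thesis), a higher-order equivalent thermal model is typically an RC network of Foster form whose impedance maps dissipated power to channel temperature rise:

\[
Z_{\mathrm{th}}(s) = \sum_{k=1}^{n} \frac{R_{\mathrm{th},k}}{1 + s\,R_{\mathrm{th},k} C_{\mathrm{th},k}},
\qquad
T_{\mathrm{ch}} = T_{\mathrm{base}} + P_{\mathrm{diss}} \sum_{k=1}^{n} R_{\mathrm{th},k} \quad\text{(steady state)},
\]

where each pair \(R_{\mathrm{th},k}, C_{\mathrm{th},k}\) contributes one thermal time constant \(\tau_k = R_{\mathrm{th},k}C_{\mathrm{th},k}\); fitting several such poles to pulsed and transient measurements is what allows the instantaneous channel temperature to track complex modulated signals.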

Abstract:

High-speed semiconductor lasers are an integral part of the implementation of high-bit-rate optical communication systems. They are compact, rugged, reliable, long-lived, and relatively inexpensive sources of coherent light. Due to the very low attenuation window that exists in silica-based optical fiber at 1.55 μm and the zero-dispersion point at 1.3 μm, they have become the mainstay of optical fiber communication systems. For the fabrication of lasers with gratings, such as distributed Bragg reflector or distributed feedback lasers, etching is the most critical step. Etching defines the lateral dimensions of the structure, which determine the performance of optoelectronic devices. In this thesis, studies and experiments were carried out on the existing etching processes for InP, and a novel dry etching process was developed. The newly developed process is based on Cl2/CH4/H2/Ar chemistry and results in very smooth surfaces and vertical sidewalls. With this process the grating definition was significantly improved compared to other technological developments in the field. A surface-defined grating approach is used in this thesis work, which does not require any regrowth steps and makes the whole fabrication process simpler and more cost-effective. Moreover, this grating fabrication process is fully compatible with nanoimprint lithography and can be used for high-throughput, low-cost manufacturing. With the usual etching techniques reported before, it is not possible to etch very deep because of the aspect-ratio-dependent etching phenomenon, in which the etch rate slows down with increasing etch depth, resulting in non-vertical sidewalls and footing effects. Although quite vertical sidewalls were achieved with our developed process, footing was still a problem. To overcome the challenges related to grating definition and deep etching, a completely new three-step gas chopping dry etching process was developed. This was the very first time that a time-multiplexed etching process for an InP-based material system was demonstrated. The developed gas chopping process showed extraordinary results, including a high mask selectivity of 15, a moderate etching rate, very vertical sidewalls and a record-high aspect ratio of 41. Both of the developed etching processes are fully compatible with nanoimprint lithography and can be used for low-cost, high-throughput fabrication. A large number of broad-area lasers, ridge waveguide lasers, distributed feedback lasers, distributed Bragg reflector lasers and coupled-cavity injection grating lasers were fabricated using the developed one-step etching process. Very extensive characterization was done to optimize all the important design and fabrication parameters. The devices developed have shown excellent performance, with a very high side-mode suppression ratio of more than 52 dB, an output power of 17 mW per facet, a high efficiency of 0.15 W/A, stable operation over temperature and injected currents, and a threshold current as low as 30 mA for an almost 1 mm long device. A record-high modulation bandwidth of 15 GHz with electron-photon resonance and open eye diagrams for 10 Gbps data transmission were also shown.

Abstract:

This study investigated the relationship between higher education and the requirements of the world of work, with an emphasis on the effect of problem-based learning (PBL) on graduates' competencies. The implementation of the full PBL method is costly (Albanese & Mitchell, 1993; Berkson, 1993; Finucane, Shannon, & McGrath, 2009). However, the implementation of PBL in a less-than-curriculum-wide mode is more achievable in a broader context (Albanese, 2000). This means that higher education institutions implement only a few PBL components in the curriculum, or a teacher implements a few PBL components at the course level. For this kind of implementation there is a need to identify PBL components and their effects on particular educational outputs (Hmelo-Silver, 2004; Newman, 2003). So far, however, there has been little research on this topic. The main aims of this study were: (1) to identify the individual PBL components, which was manifested in the development of a valid and reliable PBL implementation questionnaire, and (2) to determine the effect of each identified PBL component on specific graduates' competencies. The analysis was based on quantitative data collected in a survey of medicine graduates of Gadjah Mada University, Indonesia. A total of 225 graduates responded to the survey. The result of confirmatory factor analysis (CFA) showed that all individual constructs of PBL and graduates' competencies had acceptable goodness-of-fit (GOF). Additionally, the factor loadings (standardized loading estimates), the AVEs (average variance extracted), CRs (construct reliability), and ASVs (average shared squared variance) provided evidence of convergent and discriminant validity. All values indicated valid and reliable measurements. The investigation of the effects of PBL showed that each PBL component had specific effects on graduates' competencies. Interpersonal competencies were affected by the Student-centred learning (β = .137; p < .05) and Small group components (β = .078; p < .05). Problem as stimulus affected Leadership (β = .182; p < .01). Real-world problems affected Personal and organisational competencies (β = .140; p < .01) and Interpersonal competencies (β = .114; p < .05). Teacher as facilitator affected Leadership (β = .142; p < .05). Self-directed learning affected Field-related competencies (β = .080; p < .05). These results can help higher education institutions and educators make informed choices about the implementation of PBL components. With this information, higher education institutions and educators can fulfil their educational goals while at the same time respecting their limited resources. This study seeks to improve on prior studies' research methods in four major ways: (1) by identifying PBL components based on theory and empirical data; (2) by using latent variables in the structural equation modelling instead of using a variable as a proxy for a construct; (3) by using CFA to validate the latent structure of the measurement, thus providing better evidence of validity; and (4) by using graduate survey data, which is suitable for analysing PBL effects within the framework of the relationship between higher education and the world of work.
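For reference, the validity statistics cited above are conventionally computed from the standardized loadings \(\lambda_i\) of a construct's \(n\) indicators (standard definitions, not quoted from the study):

\[
\mathrm{AVE} = \frac{\sum_{i=1}^{n} \lambda_i^{2}}{n},
\qquad
\mathrm{CR} = \frac{\bigl(\sum_{i=1}^{n} \lambda_i\bigr)^{2}}{\bigl(\sum_{i=1}^{n} \lambda_i\bigr)^{2} + \sum_{i=1}^{n} \bigl(1-\lambda_i^{2}\bigr)},
\]

with convergent validity commonly judged by AVE > .50 and CR > .70, and discriminant validity by each construct's AVE exceeding its ASV.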

Abstract:

The application of compositional data analysis through log-ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
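To spell out the IIA property referred to above (standard multinomial logit algebra):

\[
s_j = \frac{e^{\eta_j}}{\sum_{k} e^{\eta_k}}
\quad\Longrightarrow\quad
\frac{s_i}{s_j} = e^{\eta_i-\eta_j},
\]

so the ratio of any two shares depends only on those two outcomes and is unchanged when outcomes are added or deleted; equivalently \(\ln(s_i/s_j) = \eta_i - \eta_j\), which is precisely the log-ratio linear structure used in compositional data analysis. The nested logit relaxes this by allowing correlation within nests of outcomes.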

Abstract:

There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a north azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
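A minimal sketch of the proposed Cu-Mo procedure (assuming lognormal behaviour, hence the log-log regression; the variable names and detection limit are illustrative):

    import numpy as np

    def impute_rounded_zeros(cu, mo, detection_limit):
        # Replace below-detection Mo values using their correlation with Cu.
        # The regression is fitted on the lower quartile of the *measured*
        # Mo values, as proposed above, and then used to predict Mo from Cu
        # wherever Mo was censored.
        cu, mo = np.asarray(cu, float), np.asarray(mo, float)
        measured = mo >= detection_limit
        q1 = np.quantile(mo[measured], 0.25)
        fit = measured & (mo <= q1)          # lower quartile of real values
        slope, intercept = np.polyfit(np.log(cu[fit]), np.log(mo[fit]), 1)
        imputed = mo.copy()
        imputed[~measured] = np.exp(intercept + slope * np.log(cu[~measured]))
        return imputed

Each censored molybdenum value thus receives an estimate driven by its own copper value rather than a fixed fraction of the detection limit, which is the advantage claimed above.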