900 results for growth-survival trade-off
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal
Abstract:
This thesis is organized in three chapters. The first two are concerned with evaluating, through estimation methods, causal or treatment effects in a data-rich environment. The last chapter relates to the economics of education; more precisely, in that chapter I evaluate the effect of high-school specialization on the choice of college major and on performance. In the first chapter, I study the efficient estimation of a finite-dimensional parameter in a linear model where the number of instruments may be very large or infinite. Using a large number of moment conditions improves the asymptotic efficiency of instrumental-variable estimators but increases the bias. I propose a regularized version of the LIML estimator based on three different regularization methods (Tikhonov, Landweber-Fridman, and principal components) that reduce the bias. The second chapter extends this work by allowing the presence of a large number of weak instruments. The weak-instrument problem is the consequence of a very small concentration parameter; in order to increase the concentration parameter, I propose increasing the number of instruments. I then show that the regularized 2SLS and LIML estimators are consistent and asymptotically normal. The third chapter of this thesis analyzes the effect of high-school specialization on the choice of college major. Using American data, I evaluate the relationship between performance in college and the different types of courses taken during secondary school. The results suggest that students choose the majors in which they acquired more skills in high school. However, there is a U-shaped relationship between diversification and college performance, suggesting a tension between specialization and diversification.
The underlying trade-off is assessed by estimating a structural model of human-capital acquisition in high school and major choice. Counterfactual analyses imply that one additional quantitative course increases enrollment in science and technology majors by 4 percentage points.
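The regularized first stage described in this abstract can be made concrete with a small numerical sketch. The following is an illustrative Tikhonov (ridge-type) regularized two-stage least-squares fit with many instruments, not the author's actual estimator; the function name, the data-generating process, and the choice of regularization parameter `alpha` are all assumptions made for the example.

```python
import numpy as np

def regularized_2sls(y, X, Z, alpha=1.0):
    """Tikhonov-regularized 2SLS sketch with many instruments Z (n x L)."""
    n = Z.shape[0]
    K = Z @ Z.T / n
    # Tikhonov-regularized "hat" matrix: P_alpha = K (K + alpha I)^{-1}.
    # For alpha -> 0 this approaches the usual projection on the instruments.
    P = K @ np.linalg.inv(K + alpha * np.eye(n))
    X_hat = P @ X                       # regularized first-stage fitted values
    # Second stage: IV estimate using the regularized fitted values
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)
```

In practice the regularization parameter would be chosen data-dependently (the thesis derives optimal choices); here it is simply fixed.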
Abstract:
Some problems seem impossible to solve without the help of an honest third party. How can two millionaires find out who is richer without revealing to each other the value of their assets? What can be done to prevent satellite collisions when the trajectories are secret? How can researchers learn the links between drugs and diseases without compromising patients' privacy? How can an organization prevent the government from abusing the information it holds, given that the organization itself must have no access to that information? Multiparty computation, a branch of cryptography, studies how to build protocols for carrying out such tasks without the help of an honest third party. These protocols must be private, correct, efficient, and robust. A protocol is private if an adversary learns nothing beyond what an honest third party would give it. A protocol is correct if an honest player receives what an honest third party would give him. A protocol should of course be efficient. Robustness means that a protocol works even if a small set of players cheats. We show that, under the assumption of a simultaneous broadcast channel, robustness can be traded for validity and privacy against certain sets of adversaries. Multiparty computation has four basic tools: oblivious transfer, commitment, secret sharing, and circuit garbling. Multiparty computation protocols can be built from these tools alone. Protocols can also be built from computational assumptions, but protocols built from the basic tools are flexible and can withstand technological change and algorithmic improvements. We ask whether efficiency requires computational assumptions.
We show that it does not, by constructing efficient protocols from these basic tools. This thesis consists of four articles written in collaboration with other researchers; they constitute the mature part of my research and are my main contributions over this period. In the first work presented in this thesis, we study the commitment capacity of noisy channels. We first prove a strict lower bound implying that, unlike oblivious transfer, there is no constant-rate protocol for bit commitments. We then show that, by restricting the way commitments may be opened, we can do better, even achieving a constant rate in some cases. This is done by exploiting the notion of cover-free families. In the second article, we show that for some problems there is a trade-off between robustness, validity, and privacy; it is obtained using verifiable secret sharing, a zero-knowledge proof, the concept of ghosts, and a technique we call balls and bins. In our third contribution, we show that a large number of protocols in the literature based on computational assumptions can be instantiated from a primitive called Verifiable Oblivious Transfer, via the concept of Generalized Oblivious Transfer; the protocol uses secret sharing as a basic tool. In the last publication, we construct an efficient constant-round protocol for two-party computation. The protocol's efficiency derives from replacing the core of a standard protocol with a primitive that works only imperfectly but is very cheap, and protecting the protocol against its defects using the concept of privacy amplification.
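Of the four basic tools the abstract names, secret sharing is the simplest to illustrate. Below is a minimal, generic sketch of additive secret sharing over a prime field; the modulus, function names, and parameters are illustrative choices of mine, not taken from the thesis. Each share alone is uniformly random, yet all shares together reconstruct the secret.

```python
import random

P = 2**61 - 1  # public prime modulus (an illustrative choice)

def share(secret, n):
    """Split `secret` into n additive shares modulo P."""
    # n-1 shares are uniformly random; the last one fixes the sum.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Any strict subset of shares reveals nothing; all of them sum to the secret."""
    return sum(shares) % P
```

Verifiable secret sharing, used in the second article, additionally binds the dealer to handing out consistent shares.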
Abstract:
This paper explores situations where tenants in public houses, in a specific neighborhood, are given the legislated right to buy the houses they live in, or can choose to remain in their houses and pay the regulated rent. This type of legislation has been passed in many European countries in the last 30-35 years (the U.K. Housing Act 1980 is a leading example). The main objective of this type of legislation is to transfer the ownership of the houses from the public authority to the tenants. To achieve this goal, the selling prices of the public houses are typically heavily subsidized. The legislating body then faces a trade-off between achieving the goals of the legislation and allocating the houses efficiently. This paper investigates this specific trade-off and identifies an allocation rule that is individually rational, equilibrium selecting, and group non-manipulable in a restricted preference domain that contains “almost all” preference profiles. In this restricted domain, the identified rule is the equilibrium-selecting rule that transfers the maximum number of ownerships from the public authority to the tenants. This rule is preferred to the current U.K. system by both the existing tenants and the public authority. Finally, a dynamic process for finding the outcome of the identified rule, in a finite number of steps, is provided.
Abstract:
Decision making is a fundamental computational process in many aspects of animal behavior. The model most often encountered in studies of decision making is the drift-diffusion model. It has long explained a wide variety of behavioral and neurophysiological data in this field. However, another model, the urgency model, explains the same data equally well, more parsimoniously and with firmer theoretical grounding. In this work, we first review the origins and development of the diffusion model and see how it became established as the framework for interpreting most experimental data on decision making. In doing so, we note its strengths in order to then compare it objectively and rigorously to alternative models. We re-examine a number of implicit and explicit assumptions made by this model and highlight some of its shortcomings. This analysis frames our introduction and discussion of the urgency model. Finally, we present an experiment whose methodology makes it possible to dissociate the two models, and whose results illustrate the empirical and theoretical limits of the diffusion model while clearly demonstrating the validity of the urgency model. We conclude by discussing the potential contribution of the urgency model to the study of certain brain pathologies, emphasizing new research perspectives.
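For concreteness, the drift-diffusion model this abstract contrasts with the urgency model can be simulated in a few lines. This is a generic textbook sketch with arbitrary parameter values, not the specific model fitted in the thesis.

```python
import numpy as np

def ddm_trial(drift, threshold, dt=0.001, noise=1.0, rng=None):
    """One drift-diffusion trial: noisy evidence accumulates until it
    reaches +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, decision_time)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t
```

An urgency-gating model would instead multiply (low-pass-filtered) momentary evidence by a growing urgency signal, forcing a decision even when evidence is weak; dissociating these two mechanisms is what the experiment described above is designed to do.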
Abstract:
The investigation aimed to establish the effect of salinity on the culture performance of Penaeus indicus in pokkali fields and to determine the growth performance of the shrimp at varying salinities. The experiments were laid out at the Rice Research Station, Vyttila, of Kerala Agricultural University, in three fields of 1000 m² each. The results clearly establish that shrimp stocked at higher salinity (20-25 ppt) for 45 days gave higher growth, survival and production than those stocked at lower salinity (10-15 ppt), even when the low-salinity culture was maintained for longer periods. In the prolonged culture experiments conducted for 120 days at 10-25 ppt salinity, the results were poorer than in the short-duration culture at higher salinity, with production values similar to the low-salinity culture. This clearly establishes the importance of salinity as an ecological factor with a profound influence on shrimp farming operations.
Abstract:
The present work deals with various aspects of the population characteristics of Penaeus indicus, Metapenaeus dobsoni and Metapenaeus monoceros during their nursery phase in tidal ponds and adjacent backwaters. The importance of the present study is to suggest a scientific basis for the management of penaeid resources in tidal ponds and backwaters, based on their biological characteristics, to ensure better yield. Seasonal closure of fishing will be effective in improving the size of the shrimp at harvest. The hydrology of the tidal ponds varied with location but showed a common seasonal pattern. Seasonal variation in temperature was very small, fluctuating between 27.5 and 32.3°C in tidal ponds and 26.9 and 29.9°C in open backwaters. Improvement of nursery habitats with due consideration for the biological requirements of the resource will ensure better growth, survival and abundance of the stock. The recruitment, growth and emigration data of prawns from their nurseries can be used successfully for fishery forecasting. By projecting juvenile growth forward through time, it is possible to establish which cohort contributes to the offshore fishery each year. Thus, by interpreting the recruitment and growth data of species in their nurseries together with offshore catch data, the fishery can be forecast successfully.
Abstract:
The Indian edible oyster Crassostrea madrasensis (Preston) is known to be a highly suitable candidate species for culture. Though C. madrasensis has been subjected to intensive research, there has been no significant attempt to culture this oyster commercially. One major reason for the lack of interest in oyster culture could be the disparity in growth, survival and production reported by earlier workers from different regions along the Indian coast. Greater predictability of production can create confidence and encourage entrepreneurs interested in oyster culture. The present study, a detailed investigation of the influence of various environmental variables on the growth and reproduction of C. madrasensis, is not confined to the impact of hydrological parameters alone but also extends to the effect of different degrees of aerial exposure on growth and survival. The main objective of the study is to develop the background for the subsequent development of a site-suitability index for the culture of C. madrasensis along the Indian coast. Two sets of experiments were conducted during the present study. Details of the experiments are presented in the thesis in two major chapters comprising four sections each; each chapter has a separate introduction, materials and methods, results, and discussion.
Abstract:
The aim of the present investigation is to build up knowledge of the role of the commensal bacteria present on prawns during storage at various temperatures. The study evaluates the nature of spoilage of prawns during storage at three different temperatures (28±2°C, 4°C and -18°C) by organoleptic assessment, accumulation of trimethylamine, ammonia content, changes in flesh pH, and total heterotrophic bacterial population at various time intervals, and examines the changes in the proximate composition (protein, carbohydrate, lipid, ash and moisture) of the prawns during storage by estimating these contents at different time intervals alongside the spoilage assessment. The researcher studies the occurrence and role of the various bacterial genera that form the spoilage flora during storage, and determines the distribution of hydrolytic-enzyme-producing bacteria by evaluating their ability to produce enzymes such as caseinase, gelatinase, amylase, lipase and urease. The spoilage potential of the bacteria is assessed by testing their ability to reduce trimethylamine oxide (TMAO) to trimethylamine (TMA) and to produce off-odour in flesh broth and halos on flesh agar media. The researcher also emphasizes the growth kinetics of selected potential spoilers by growing them in different media and assessing the effect of sodium chloride concentration, temperature and pH on their growth, survival and generation time.
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. Here we study a new type of restarting automaton, the so-called t-sRL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size 1 only and works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. We focus on the descriptional complexity of these automata, establishing two complexity measures that are both based on the description of t-sRL-automata in terms of so-called meta-instructions. We present some hierarchy results as well as a non-recursive trade-off between deterministic 2-sRL-automata and finite-state acceptors.
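To give a feel for analysis by reduction with t delete steps per cycle, here is a toy illustration of my own (not an example from the paper): a 2-sRL-style acceptor for the non-regular language of words over {a, b} containing equally many a's and b's, where each cycle deletes one a and one b, and the empty word is accepted.

```python
def accepts_equal_ab(word):
    """Accept words with |w|_a == |w|_b by analysis by reduction,
    in the spirit of a 2-sRL-automaton (two delete steps per cycle)."""
    w = list(word)
    while w:
        if 'a' in w and 'b' in w:
            w.remove('a')  # first delete step of the cycle
            w.remove('b')  # second delete step of the cycle
        else:
            return False   # no further reduction possible: reject
    return True            # reduced to the empty word: accept
```

A finite-state acceptor cannot recognize this language at all, which hints at why the descriptional trade-off against finite-state acceptors studied in the paper can be so dramatic.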
Abstract:
In Mark Weiser's vision of ubiquitous computing, computers disappear from the users' focus and interact seamlessly with other computers and users in order to provide information and services. This shift away from direct computer interaction requires another way for applications to interact without bothering the user. Context is information that can be used to characterize the situation of persons, locations, or other objects relevant to the applications. Context-aware applications are capable of monitoring and exploiting knowledge about external operating conditions; they can adapt their behaviour based on the retrieved information and thus replace (at least to a certain extent) the missing user interactions. Context awareness can therefore be assumed to be an important ingredient for applications in ubiquitous computing environments. However, context management in such environments must reflect their specific characteristics, for example distribution, mobility, resource-constrained devices, and heterogeneity of context sources. Modern mobile devices are equipped with fast processors, sufficient memory, and several sensors, such as a Global Positioning System (GPS) sensor, a light sensor, or an accelerometer. Since many applications in ubiquitous computing environments can exploit context information to enhance their service to the user, these devices are highly useful for context-aware applications. Additionally, context reasoners and external context providers can be incorporated. Several context sensors, reasoners and context providers may offer the same type of information; however, these providers can differ in the quality levels (e.g. accuracy), representations (e.g. a position given as coordinates or as an address), and costs (such as battery consumption) of the offered information.
In order to simplify the development of context-aware applications, developers should be able to access context information transparently, without dealing with the underlying context-accessing techniques and distribution aspects. They should rather be able to express which kind of information they require, which quality criteria this information should fulfil, and how much its provision may cost (not only monetary cost but also energy or performance usage). For this purpose, application developers as well as developers of context providers need a common language and vocabulary to specify which information they require or provide, respectively. These descriptions and criteria have to be matched, and for such a matching it is likely that a transformation of the provided information is needed to fulfil the criteria of the context-aware application. As more than one provider may fulfil the criteria, a selection process is required, in which the system has to trade off the quality of context and the costs of a context provider against the quality of context requested by the context consumer. This selection makes it possible to turn on context sources only when required; explicitly selecting context services, and thereby dynamically activating and deactivating local context providers, also reduces resource consumption, as unused context sensors in particular are deactivated. One promising solution is a middleware providing appropriate support based on the principles of service-oriented computing, such as loose coupling, abstraction, reusability, and discoverability of context providers. This allows us to abstract context sensors, context reasoners and also external context providers as context services.
In this thesis we present our solution, consisting of a context model and ontology, a context offer and query language, a comprehensive matching and mediation process, and a selection service. The matching and mediation process and the selection service in particular differ from existing work. The matching and mediation process allows the autonomous establishment of mediation processes that transform information from an offered representation into a requested representation. Unlike other approaches, the selection service does not select a single service for a single request; rather, it selects a set of services in order to fulfil all requests, which also facilitates the sharing of services. The approach is extensively reviewed against the different requirements, and a set of demonstrators shows its usability in real-world scenarios.
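The offer/query matching and cost-aware selection described above can be sketched as follows. This is a deliberately simplified, hypothetical illustration (the names `ContextOffer` and `select_provider` and the single accuracy/cost criteria are mine); the thesis' middleware matches far richer descriptions, mediates between representations, and selects whole sets of services.

```python
from dataclasses import dataclass

@dataclass
class ContextOffer:
    name: str
    info_type: str   # e.g. "position"
    accuracy: float  # offered quality of context, 0..1
    cost: float      # e.g. relative battery cost of activating the source

def select_provider(offers, info_type, min_accuracy):
    # Matching: keep offers of the requested type that meet the quality criterion.
    # Selection: among those, trade quality against cost by picking the cheapest,
    # so more expensive, unneeded context sources can stay deactivated.
    matching = [o for o in offers if o.info_type == info_type
                and o.accuracy >= min_accuracy]
    return min(matching, key=lambda o: o.cost, default=None)
```

With offers such as GPS (accurate but battery-hungry) and Wi-Fi positioning (coarser but cheap), a query with a modest accuracy criterion would activate only the cheap source.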
Abstract:
The capability of estimating the walking direction of people would be useful in many applications, such as those involving autonomous cars and robots. We introduce an approach for estimating the walking direction of people from images, based on learning the correct classification of a still image using SVMs. We find that the performance of the system can be improved by classifying each image of a walking sequence and combining the outputs of the classifier. Experiments were performed to evaluate our system and to estimate the trade-off between the number of images in a walking sequence and performance.
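The gain from combining per-frame classifier outputs over a walking sequence can be illustrated with a small simulation. This is entirely synthetic (the paper's system uses SVMs on real images): if each frame is classified correctly with some probability, a plurality vote over the sequence is more accurate than any single frame.

```python
import numpy as np

def majority_vote(frame_predictions):
    """Combine per-frame direction labels into one sequence-level label."""
    vals, counts = np.unique(frame_predictions, return_counts=True)
    return vals[np.argmax(counts)]

def simulate(p_correct=0.7, n_frames=9, n_trials=2000, n_classes=8, seed=0):
    """Compare per-frame accuracy with voted sequence-level accuracy."""
    rng = np.random.default_rng(seed)
    true_label = 3  # one of n_classes discretized walking directions
    frame_acc, seq_acc = 0.0, 0
    for _ in range(n_trials):
        # Each frame is right with prob p_correct, otherwise a random class.
        preds = np.where(rng.random(n_frames) < p_correct,
                         true_label,
                         rng.integers(0, n_classes, n_frames))
        frame_acc += np.mean(preds == true_label)
        seq_acc += majority_vote(preds) == true_label
    return frame_acc / n_trials, seq_acc / n_trials
```

More frames per sequence raise the vote's accuracy but delay the estimate, which is exactly the trade-off the experiments quantify.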
Abstract:
Intrinsic resistance to the epidermal growth factor receptor (EGFR; HER1) tyrosine kinase inhibitor (TKI) gefitinib, and more generally to EGFR TKIs, is a common phenomenon in breast cancer. The availability of molecular criteria for predicting sensitivity to EGFR TKIs is, therefore, the most relevant issue for their correct use and for planning future research. Though it appears that in non-small-cell lung cancer (NSCLC) the response to gefitinib is directly related to the occurrence of specific mutations in the EGFR TK domain, breast cancer patients cannot be selected for treatment with gefitinib on the same basis, since such EGFR mutations have been reported neither in primary breast carcinomas nor in several breast cancer cell lines. Alternatively, there is general agreement on the hypothesis that molecular alterations that activate transduction pathways downstream of EGFR (i.e., the MEK1/MEK2-ERK1/2 MAPK and PI-3'K-AKT growth/survival signaling cascades) significantly affect the response to EGFR TKIs in breast carcinomas. However, no studies so far have addressed a role of EGF-related ligands as intrinsic breast cancer cell modulators of EGFR TKI efficacy. We recently monitored gene expression profiles and sub-cellular localization of HER-1/-2/-3/-4 related ligands (i.e., EGF, amphiregulin, transforming growth factor-α, β-cellulin, epiregulin and neuregulins) prior to and after gefitinib treatment in a panel of human breast cancer cell lines. First, gefitinib-induced changes in the endogenous levels of EGF-related ligands correlated with the natural degree of breast cancer cell sensitivity to gefitinib. While breast cancer cells intrinsically resistant to gefitinib (IC50 ≥15 μM) markedly up-regulated (up to 600 times) the expression of genes coding for HER-specific ligands, a significant down-regulation (up to 106 times) of HER ligand gene transcription was found in breast cancer cells intrinsically sensitive to gefitinib (IC50 ≤1 μM).
Second, loss of HER1 function differentially regulated the nuclear trafficking of HER-related ligands. While gefitinib treatment induced an active import and nuclear accumulation of the HER ligand NRG in intrinsically gefitinib-resistant breast cancer cells, an active export and nuclear loss of NRG was observed in intrinsically gefitinib-sensitive breast cancer cells. In summary, through in vitro and pharmacodynamic studies we have learned that, besides mutations in the HER1 gene, oncogenic changes downstream of HER1 are the key players regulating gefitinib efficacy in breast cancer cells. It now appears that pharmacological inhibition of HER1 function also leads to striking changes in both the gene expression and the nucleo-cytoplasmic trafficking of HER-specific ligands, and that this response correlates with the intrinsic degree of breast cancer sensitivity to the EGFR TKI gefitinib. The relevance of this previously unrecognized intracrine feedback to gefitinib warrants further study, as cancer cells could bypass the antiproliferative effects of HER1-targeted therapeutics without needing the overexpression and/or activation of other HER family members or the activation of HER-driven downstream signaling cascades.
Abstract:
The interest of this diagnostic investigation is to evaluate the treatment of environmental refugees in the international legislation currently in force. It is a pertinent element of analysis because, in recent years, climate change and its adverse effects have wreaked havoc on some populations, giving rise to what are known as environmental refugees. The lack of inclusion of this concept in international law thus represents a problem, in that these people receive no support of any kind from the international community. This work focuses on the case of the Maldives and reflects the need to create a new international regime covering the figure of the environmental refugee, in order to confront this international problem.
Abstract:
The objective of this paper is to empirically evaluate the effect of greater or lesser exposure to the Proniño program of Fundación Telefónica on the number of hours worked by children and adolescents. Using information from 2010 to 2012, the impact of the duration of exposure to the treatment is evaluated empirically, eliminating duration-related selection bias through the Propensity Score Matching methodology. The results show that exposure to the treatment does reduce the number of hours worked by children and adolescents, with the largest reductions when the time of exposure to the program is longest, in this case three years, and particularly for the 12-14 age group. Finally, the program proves more effective in reducing the weekly hours worked by boys than by girls.
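The matching step used above can be sketched as follows. This is a generic, simplified implementation of propensity score matching (one covariate, 1-nearest-neighbour, function names of my own choosing), not the paper's exact specification, demonstrated on simulated data where exposure is confounded with the covariate.

```python
import numpy as np

def propensity_scores(X, t, lr=0.1, iters=2000):
    """Logistic regression by gradient ascent: e(x) = P(T=1 | X=x)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w += lr * X1.T @ (t - p) / len(t)
    return 1.0 / (1.0 + np.exp(-X1 @ w))

def att_nn_match(y, t, score):
    """ATT by 1-NN matching: pair each treated unit with the control
    whose propensity score is closest, then average outcome differences."""
    treated = np.where(t == 1)[0]
    control = np.where(t == 0)[0]
    diffs = [y[i] - y[control[np.argmin(np.abs(score[control] - score[i]))]]
             for i in treated]
    return float(np.mean(diffs))
```

On confounded simulated data, the naive treated-minus-control difference is biased toward zero while the matched estimate recovers the true effect, which is the logic behind removing duration selection bias in the evaluation.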