Resumo:
Plants have the ability to use the composition of incident light as a cue to adapt development and growth to their environment. Arabidopsis thaliana, as well as many crops, is best adapted to sunny habitats. When subjected to shade, these plants exhibit a variety of physiological responses collectively called the shade avoidance syndrome (SAS). It includes increased growth of the hypocotyl and petioles, a decreased growth rate of cotyledons, and reduced branching and crop yield. These responses are mainly mediated by phytochrome photoreceptors, which exist either in an active, far-red light (FR) absorbing isoform or an inactive, red light (R) absorbing isoform. In direct sunlight, the R to FR light (R/FR) ratio is high and converts the phytochromes into their physiologically active state. The phytochromes interact with downstream transcription factors such as the PHYTOCHROME INTERACTING FACTORs (PIFs), which are subsequently degraded. Light filtered through a canopy is strongly depleted in R, which results in a low R/FR ratio and renders the phytochromes inactive. Protein levels of downstream transcription factors are stabilized, which initiates the expression of shade-induced genes such as HFR1, PIL1 or ATHB-2. In my thesis, I investigated transcriptional responses mediated by the SAS in whole Arabidopsis seedlings. Using microarray and chromatin immunoprecipitation data, we identified genome-wide PIF4- and PIF5-dependent shade-regulated genes as well as putative direct target genes of PIF5. This revealed evidence for a direct regulatory link between phytochrome signaling and the growth-promoting phytohormone auxin (IAA) at the level of biosynthesis, transport and signaling. Subsequently, it was shown that free IAA levels are upregulated in response to shade. It is assumed that shade-induced auxin production takes place predominantly in the cotyledons of seedlings. This implies that IAA is subsequently transported basipetally to the hypocotyl, where it enhances elongation growth.
The importance of auxin transport for growth responses has been established by chemical and genetic approaches. To gain a better understanding of the spatio-temporal transcriptional regulation of shade-induced auxin, in a second project I generated an organ-specific high-throughput dataset focusing on the cotyledons and hypocotyl of young Arabidopsis seedlings. Interestingly, the two organs show opposite growth regulation by shade. I first investigated the spatio-transcriptional regulation of auxin-responsive genes in order to determine how broadly gene expression patterns can be explained by the hypothesized movement of auxin from cotyledons to hypocotyls in shade. The analysis suggests that several genes are indeed regulated according to our prediction, while others are regulated in a more complex manner. In addition, analysis of the gene families of auxin biosynthetic and transport components led to the identification of family members essential for shade-induced growth responses, which were subsequently confirmed experimentally. Finally, the analysis of expression patterns identified several candidate genes that may explain aspects of the opposite growth responses of the two organs.
Resumo:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years they have gone from an entertainment to an interesting research area, and one that is interesting in two ways. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their highly structured nature, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and the existence of a solution is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the level of balancing they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP.
Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all the solutions of an instance) and the hardness of GSP.
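The CSP formulation of GSP described above (row, column and rectangular-block all-different constraints, with holes whose values must be completed) can be illustrated with a minimal backtracking sketch. This is not the thesis's solver; the function name and the 4x4 example instance are illustrative assumptions.

```python
from itertools import product

def solve_gsp(grid, m, n):
    """Backtracking completion of a Generalized Sudoku Problem (GSP):
    an (m*n) x (m*n) grid whose block regions are m rows by n columns.
    `grid` is a list of lists with 0 marking a hole; returns the solved
    grid, or None when no completion exists (GSP does not guarantee one)."""
    N = m * n

    def candidates(r, c):
        # Values already used in the row, column and enclosing block.
        used = set(grid[r]) | {grid[i][c] for i in range(N)}
        br, bc = (r // m) * m, (c // n) * n  # top-left cell of the block
        used |= {grid[i][j] for i in range(br, br + m)
                            for j in range(bc, bc + n)}
        return [v for v in range(1, N + 1) if v not in used]

    for r, c in product(range(N), range(N)):
        if grid[r][c] == 0:
            for v in candidates(r, c):
                grid[r][c] = v
                if solve_gsp(grid, m, n) is not None:
                    return grid
            grid[r][c] = 0
            return None  # dead end: undo and backtrack
    return grid  # no holes left: grid is a solution

# A 4x4 instance of order m = n = 2 with four holes:
puzzle = [[1, 0, 3, 4],
          [3, 4, 0, 2],
          [0, 1, 4, 3],
          [4, 3, 2, 0]]
```

An equivalent SAT encoding would introduce a Boolean variable per (cell, value) pair and clauses enforcing the same all-different constraints; the CSP form above keeps the sketch short.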
Resumo:
The aim of this thesis was to determine criteria for selecting a new market for an industrial product. The thesis focused on established approaches to international market selection and sought to apply one method in practice in the empirical part by means of a case study. The research approach was exploratory and based on secondary analysis. The data sources used were largely secondary, yielding qualitative data; however, interviews were also conducted. A comprehensive literature review of the known theoretical approaches to international market selection formed part of the thesis, and the three most important approaches were presented in more detail. One of these, the non-systematic approach, formed the framework for the empirical part of the thesis. The empirical part sought to apply one of the non-systematic approach's models in the international paper industry. The purpose was to identify the most attractive countries for possible marketing actions in one end-use area of the product. The thesis ended up using climatic conditions, poultry headcount and poultry growth rate as filters for reducing the number of candidate countries. The empirical part of the thesis clearly suffered from a lack of relevant data; consequently, the reliability and validity of the thesis can to some extent be questioned.
Resumo:
T cells play a critical role in tumor immune surveillance as evidenced by extensive mouse-tumor model studies as well as encouraging patient responses to adoptive T cell therapies and dendritic cell vaccines. It is well established that the interplay of tumor cells with their local cellular environment can trigger events that are immunoinhibitory to T cells. More recently it is emerging that the tumor vasculature itself constitutes an important barrier to T cells. Endothelial cells lining the vessels can suppress T cell activity, target them for destruction, and block them from gaining entry into the tumor in the first place through the deregulation of adhesion molecules. Here we review approaches to break this tumor endothelial barrier and enhance T cell activity.
Resumo:
This book comprises two volumes and builds on the findings of the DISMEVAL project (Developing and validating DISease Management EVALuation methods for European health care systems), funded under the European Union's (EU) Seventh Framework Programme (FP7) (Agreement no. 223277). DISMEVAL was a three-year European collaborative project conducted between 2009 and 2011. It contributed to developing new research methods and generating the evidence base to inform decision-making in the field of chronic disease management evaluation (www.dismeval.eu). In this book, we report on the findings of the project's first phase, capturing the diverse range of contexts in which new approaches to chronic care are being implemented and evaluating the outcomes of these initiatives using an explicit comparative approach and a unified assessment framework. In this first volume, we describe the range of approaches to chronic care adopted in 12 European countries. By reflecting on the facilitators and barriers to implementation, we aim to provide policy-makers and practitioners with a portfolio of options to advance chronic care approaches in a given policy context.
Resumo:
The peroxisome proliferator-activated receptors (PPARs) are a group of nuclear receptors that function as transcription factors regulating the expression of genes involved in cellular differentiation, development, metabolism and also tumorigenesis. Three PPAR isotypes (α, β/δ and γ) have been identified, among which PPARβ/δ is the most difficult to functionally examine due to its tissue-specific diversity in cell fate determination, energy metabolism and housekeeping activities. PPARβ/δ acts both in a ligand-dependent and -independent manner. The specific type of regulation, activation or repression, is determined by many factors, among which the type of ligand, the presence/absence of PPARβ/δ-interacting corepressor or coactivator complexes and PPARβ/δ protein post-translational modifications play major roles. Recently, new global approaches to the study of nuclear receptors have made it possible to evaluate their molecular activity in a more systemic fashion, rather than deeply digging into a single pathway/function. This systemic approach is ideally suited for studying PPARβ/δ, due to its ubiquitous expression in various organs and its overlapping and tissue-specific transcriptomic signatures. The aim of the present review is to present in detail the diversity of PPARβ/δ function, focusing on the different information gained at the systemic level, and describing the global and unbiased approaches that combine a systems view with molecular understanding.
Resumo:
Market segmentation first emerged in the 1950s and has since been one of the basic concepts of marketing. Most segmentation research has, however, focused on consumer markets, while the segmentation of business and industrial markets has received less attention. The aim of this study is to create a segmentation model for industrial markets from the perspective of a provider of IT products and services. The purpose is to determine whether the case company's current customer databases enable effective segmentation, to identify suitable segmentation criteria, and to assess whether and how the databases should be developed to enable more effective segmentation. The intention is to create a single model shared by the different business units; the objectives of the different units must therefore be taken into account to avoid conflicts of interest. The research methodology is a case study. Both secondary and primary sources were used, the latter including the case company's own databases and interviews. The starting point of the study was the research problem: can database-driven segmentation be used for profitable customer relationship management in the SME sector? The aim is to create a segmentation model that exploits the information in the databases without compromising the conditions for effective and profitable segmentation. The theoretical part examines segmentation in general, with an emphasis on industrial market segmentation, aiming to give a clear picture of the different approaches to the topic and to deepen the view of the most important theories. Analysis of the databases revealed clear gaps in the customer data: basic contact information is available, but data usable for segmentation is very limited. The flow of information from resellers and wholesalers should be improved in order to obtain end-customer data. Segmentation based on the current data relies mainly on secondary information such as industry and company size, and even this information is not available for all companies in the databases.
Resumo:
We propose an innovative, integrated, cost-effective health system to combat major non-communicable diseases (NCDs), including cardiovascular, chronic respiratory, metabolic, rheumatologic and neurologic disorders and cancers, which together are the predominant health problem of the 21st century. This proposed holistic strategy involves comprehensive patient-centered integrated care and multi-scale, multi-modal and multi-level systems approaches to tackle NCDs as a common group of diseases. Rather than studying each disease individually, it will take into account their intertwined gene-environment, socio-economic interactions and co-morbidities that lead to individual-specific complex phenotypes. It will implement a road map for predictive, preventive, personalized and participatory (P4) medicine based on a robust and extensive knowledge management infrastructure that contains individual patient information. It will be supported by strategic partnerships involving all stakeholders, including general practitioners associated with patient-centered care. This systems medicine strategy, which will take a holistic approach to disease, is designed to allow the results to be used globally, taking into account the needs and specificities of local economies and health systems.
Resumo:
OBJECTIVE: To evaluate the effectiveness of a complex intervention implementing best practice guidelines recommending clinicians screen and counsel young people across multiple psychosocial risk factors, on clinicians' detection of health risks and patients' risk taking behaviour, compared to a didactic seminar on young people's health. DESIGN: Pragmatic cluster randomised trial where volunteer general practices were stratified by postcode advantage or disadvantage score and billing type (private, free national health, community health centre), then randomised into either intervention or comparison arms using a computer generated random sequence. Three months post-intervention, patients were recruited from all practices post-consultation for a Computer Assisted Telephone Interview and followed up three and 12 months later. Researchers recruiting, consenting and interviewing patients and patients themselves were masked to allocation status; clinicians were not. SETTING: General practices in metropolitan and rural Victoria, Australia. PARTICIPANTS: General practices with at least one interested clinician (general practitioner or nurse) and their 14-24 year old patients. INTERVENTION: This complex intervention was designed using evidence based practice in learning and change in clinician behaviour and general practice systems, and included best practice approaches to motivating change in adolescent risk taking behaviours. The intervention involved training clinicians (nine hours) in health risk screening, use of a screening tool and motivational interviewing; training all practice staff (receptionists and clinicians) in engaging youth; provision of feedback to clinicians of patients' risk data; and two practice visits to support new screening and referral resources. Comparison clinicians received one didactic educational seminar (three hours) on engaging youth and health risk screening. 
OUTCOME MEASURES: Primary outcomes were patient report of (1) clinician detection of at least one of six health risk behaviours (tobacco, alcohol and illicit drug use, risks for sexually transmitted infection, STI, unplanned pregnancy, and road risks); and (2) change in one or more of the six health risk behaviours, at three months or at 12 months. Secondary outcomes were likelihood of future visits, trust in the clinician after exit interview, clinician detection of emotional distress and fear and abuse in relationships, and emotional distress at three and 12 months. Patient acceptability of the screening tool was also described for the intervention arm. Analyses were adjusted for practice location and billing type, patients' sex, age, and recruitment method, and past health risks, where appropriate. An intention to treat analysis approach was used, which included multilevel multiple imputation for missing outcome data. RESULTS: 42 practices were randomly allocated to intervention or comparison arms. Two intervention practices withdrew post allocation, prior to training, leaving 19 intervention (53 clinicians, 377 patients) and 21 comparison (79 clinicians, 524 patients) practices. 69% of patients in both intervention (260) and comparison (360) arms completed the 12 month follow-up. Intervention clinicians discussed more health risks per patient (59.7%) than comparison clinicians (52.7%) and thus were more likely to detect a higher proportion of young people with at least one of the six health risk behaviours (38.4% vs 26.7%, risk difference [RD] 11.6%, Confidence Interval [CI] 2.93% to 20.3%; adjusted odds ratio [OR] 1.7, CI 1.1 to 2.5). Patients reported less illicit drug use (RD -6.0, CI -11 to -1.2; OR 0.52, CI 0.28 to 0.96), and less risk for STI (RD -5.4, CI -11 to 0.2; OR 0.66, CI 0.46 to 0.96) at three months in the intervention relative to the comparison arm, and for unplanned pregnancy at 12 months (RD -4.4; CI -8.7 to -0.1; OR 0.40, CI 0.20 to 0.80).
No differences were detected between arms on other health risks. There were no differences on secondary outcomes, apart from a greater detection of abuse (OR 13.8, CI 1.71 to 111). There were no reports of harmful events and intervention arm youth had high acceptance of the screening tool. CONCLUSIONS: A complex intervention, compared to a simple educational seminar for practices, improved detection of health risk behaviours in young people. Impact on health outcomes was inconclusive. Technology enabling more efficient, systematic health-risk screening may allow providers to target counselling toward higher risk individuals. Further trials require more power to confirm health benefits. TRIAL REGISTRATION: ISRCTN.com ISRCTN16059206.
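The effect sizes reported above combine two standard summary measures. As a hedged illustration only (the trial's figures are adjusted for covariates and clustering, so a crude calculation merely approximates them), the risk difference and odds ratio for the primary detection outcome can be recomputed from the reported proportions:

```python
def risk_difference(p_intervention, p_comparison):
    """Absolute risk difference between two arms, from their proportions."""
    return p_intervention - p_comparison

def odds_ratio(p_intervention, p_comparison):
    """Crude (unadjusted) odds ratio from two proportions."""
    def odds(p):
        return p / (1 - p)
    return odds(p_intervention) / odds(p_comparison)

# Detection of >=1 of six risk behaviours: 38.4% vs 26.7% as reported.
rd = risk_difference(0.384, 0.267)   # ~0.117, i.e. ~11.6 percentage points
crude_or = odds_ratio(0.384, 0.267)  # ~1.71, close to the adjusted OR 1.7
```

That the crude odds ratio lands near the reported adjusted OR of 1.7 suggests the adjustment shifted the estimate only modestly, but the confidence intervals quoted in the abstract come from the adjusted multilevel model, not from this arithmetic.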
Resumo:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content.
Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms.
At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying of search interfaces and the retrieval of data behind them is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
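The core of form-query automation described above (programmatically supplying the input values a user would type into a search form) can be sketched in miniature: a GET-style form submission amounts to encoding field/value pairs into the query string of the form's action URL before fetching the resulting dynamic page. The function name, field names and example URL below are hypothetical, not taken from the thesis's form query language.

```python
from urllib.parse import urlencode

def build_form_query(action_url, fields):
    """Encode user-supplied field values into the query string of a
    GET-style search form, mimicking a manual form submission.
    `fields` maps form field names to the values a user would enter;
    pairs are sorted only to make the resulting URL deterministic."""
    return action_url + "?" + urlencode(sorted(fields.items()))

# Hypothetical search interface of a web database:
url = build_form_query("http://example.com/search",
                       {"title": "deep web", "year": "2008"})
```

Real search interfaces add considerable complexity that this sketch ignores: POST submissions, JavaScript-driven and non-HTML forms, and the subsequent extraction of structured results from the returned pages, which is where the data model and result-page representation mentioned above come in.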
Resumo:
In geriatrics, driving cessation is addressed within the biopsychosocial model. This has broadened the scope of practitioners, not only in terms of assessing fitness to drive, but also by helping to maintain social engagements and provide support for transport transition. Causes can be addressed at different levels by adapting medication, improving physical health, modifying behaviour, adapting lifestyle, or bringing changes to the environment. This transdisciplinary approach requires an understanding of how different disciplines are linked to each other. This article reviews the philosophical principles of causality between fields and provides a framework for understanding causality within the biopsychosocial model. Understanding interlevel constraints should help practitioners overcome their differences, and favor transversal approaches to driving cessation.
Resumo:
This chapter assesses the theories and related empirical evidence regarding the factors that explain cultural innovation by cultural organizations. It begins by defining key concepts, including what is meant by a cultural organization, cultural innovation, and the innovation referent. The chapter identifies two main disciplines that have been interested in cultural innovation or innovative programming by cultural organizations: sociology and economics. The focus, contributions, and overlap of these two disciplinary approaches to cultural innovation are discussed, and the chapter concludes by identifying some gaps and putting forward some suggestions for future research.
Resumo:
Climate change affects the rate of insect invasions as well as the abundance, distribution and impacts of such invasions on a global scale. Among the principal analytical approaches to predicting and understanding the future impacts of biological invasions are Species Distribution Models (SDMs), typically in the form of correlative Ecological Niche Models (ENMs). An underlying assumption of ENMs is that species-environment relationships remain preserved during extrapolations in space and time, although this is widely criticised. The semi-mechanistic modelling platform CLIMEX employs a top-down approach using species ecophysiological traits and is able to avoid some of the issues of extrapolation, making it highly applicable to investigating biological invasions in the context of climate change. The tephritid fruit flies (Diptera: Tephritidae) comprise some of the most successful invasive species and serious economic pests around the world. Here we project CLIMEX models for 12 tephritid species into future climate scenarios to examine overall patterns of climate suitability and forecast potential distributional changes for this group. We further compare the aggregate response of the group against species-specific responses. We then consider additional drivers of biological invasions to examine how invasion potential is influenced by climate, fruit production and trade indices. Considering the group of tephritid species examined here, climate change is predicted to decrease global climate suitability and to shift the cumulative distribution poleward. However, when examining species-level patterns, the predominant directionality of range shifts for 11 of the 12 species is eastward. Most notably, management will need to consider regional changes in fruit fly invasion potential where high fruit production, trade indices and the predicted distributions of these flies overlap.
Resumo:
The media play a fundamental role in the social perception of migration and of ethnic minorities. Through the analysis of a set of broadcasts from Catalunya Radio, RAC1 and COPE, the study presented here aims to determine whether radio discourse is grounded in inclusive language that contributes to social cohesion, or whether it instead inflates conflicts and thereby helps spread xenophobia. Coinciding with the economic crisis, exclusionary language has become consolidated and widespread in the media. The analysis reveals the use of concepts and discursive strategies that point, on the one hand, to modern racism and, on the other, to hate speech, with frequent imprecision, the rendering invisible or the homogenization of certain groups, us-versus-them framings, and conflict-based approaches that emphasize ethnic differences.