974 results for Linear decision rules


Relevance:

80.00%

Publisher:

Abstract:

The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). A simple k-nearest neighbor algorithm is considered as a benchmark model. The PNN is a neural-network reformulation of well-known nonparametric principles of probability density modeling, combining a kernel density estimator with Bayesian-optimal or maximum a posteriori decision rules. PNNs are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. SVMs have recently been applied successfully to a range of environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real-data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
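The PNN described above amounts to a per-class kernel density estimate combined with a Bayes (MAP) decision rule. A minimal sketch of that idea, assuming Gaussian kernels and equal priors (the bandwidth `sigma` and the toy data are illustrative, not taken from the paper):

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5, priors=None):
    """Probabilistic neural network: per-class Gaussian kernel density
    estimate of p(x | class) combined with a MAP decision rule."""
    classes = np.unique(y_train)
    if priors is None:
        priors = {c: 1.0 / len(classes) for c in classes}
    posts = []
    for c in classes:
        Xc = X_train[y_train == c]
        # squared distances from each test point to each training point of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        dens = np.exp(-d2 / (2 * sigma**2)).mean(axis=1)  # kernel density estimate
        posts.append(priors[c] * dens)
    return classes[np.argmax(np.stack(posts, axis=1), axis=1)]

# Toy 2-D "spatial" data: two well-separated clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(X, y, np.array([[0.1, 0.0], [2.1, 1.9]])))  # → [0 1]
```

Because the class-conditional densities are explicit, the same machinery also yields the posterior probabilities needed for the quantification of accuracy mentioned in the abstract.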

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first case, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of agents who are risk-neutral or highly risk-averse. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that the customers' behaviour and the providers' decisions exhibit a strong path dependence. Moreover, we show that the providers' decisions cause the weighted average waiting time to converge towards the market's benchmark waiting time. Finally, a laboratory experiment in which subjects play the role of a service provider allowed us to conclude that capacity installation and dismantling lead times significantly affect the subjects' performance and decisions.
In particular, the provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustment decisions it has taken but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take into account uncertainty. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked-in" into a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we find that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by subjects in the laboratory. We find that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature on queueing systems, which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is large potential for further work spanning several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
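The adaptive-expectations update and the risk-adjusted facility choice described above can be sketched in a few lines (an illustration only; the weights, the risk-aversion coefficient `k` and the toy sojourn times are assumptions, not the thesis's calibration):

```python
import numpy as np

def update_expectation(memory, new_info, alpha):
    """Adaptive expectations: alpha > 0.5 -> 'reactive' customer
    (more weight on new information); alpha < 0.5 -> 'conservative'."""
    return alpha * new_info + (1 - alpha) * memory

def choose_facility(mean_est, std_est, k):
    """Join the facility with the lowest estimated upper bound of the
    sojourn time: perceived mean + k * perceived uncertainty,
    where k is the customer's degree of risk aversion."""
    upper_bound = mean_est + k * std_est
    return int(np.argmin(upper_bound))

# Toy example: two facilities, a moderately risk-averse customer (k = 1)
mean_est = np.array([5.0, 6.0])   # perceived average sojourn times
std_est = np.array([3.0, 0.5])    # perceived uncertainty
print(choose_facility(mean_est, std_est, k=1.0))  # → 1 (6.5 beats 8.0)
print(choose_facility(mean_est, std_est, k=0.0))  # → 0 (risk-neutral)
```

A risk-neutral customer (k = 0) ranks facilities by the perceived mean alone, which is why the degree of risk aversion changes the collective outcome even when the underlying service times are identical.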

Relevance:

80.00%

Publisher:

Abstract:

Background: Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection) and C4.5 methodologies for the calculation of the probability of hospital mortality; comparison of the results with the APACHE II, SAPS II and MPM II-24 scores, and with a model based on multiple logistic regression (LR). Methods: Retrospective study of 2864 patients. Random partition (70:30) into a development set (DS, n = 1808) and a validation set (VS, n = 808). Discrimination is compared using the ROC curve (AUC, 95% CI) and the percentage of correct classification (PCC, 95% CI); calibration using the calibration curve and the standardized mortality ratio (SMR, 95% CI). Results: The CTs are produced with different selections of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules) and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow score, age, (A-a)O2 gradient and antecedent of chronic illness. In the VS, all the models achieved acceptable discrimination with AUC above 0.7. CT: CART (0.75 (0.71-0.81)), CHAID (0.76 (0.72-0.79)) and C4.5 (0.76 (0.73-0.80)). PCC: CART (72 (69-75)), CHAID (72 (69-75)) and C4.5 (76 (73-79)). Calibration (SMR) was better in the CTs: CART (1.04 (0.95-1.31)), CHAID (1.06 (0.97-1.15)) and C4.5 (1.08 (0.98-1.16)). Conclusion: With different CT methodologies, trees are generated with different selections of variables and decision rules. The CTs are easy to interpret and they stratify the risk of hospital mortality. CTs should be taken into account for the classification of the prognosis of critically ill patients.
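As a rough sketch of the CART workflow described above (scikit-learn's `DecisionTreeClassifier` implements a CART-style algorithm; the synthetic data, coefficients and tree settings below are invented stand-ins, only the 70:30 split mirrors the study's design):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the ICU cohort: age, Glasgow score, inotropic therapy
rng = np.random.default_rng(42)
n = 2864
X = np.column_stack([
    rng.integers(16, 95, n),   # age
    rng.integers(3, 16, n),    # Glasgow coma score
    rng.integers(0, 2, n),     # inotropic therapy (0/1)
])
logit = 0.04 * (X[:, 0] - 60) - 0.3 * (X[:, 1] - 9) + 1.2 * X[:, 2] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated hospital mortality

# 70:30 random partition into development and validation sets
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1])
print(export_text(tree, feature_names=["age", "glasgow", "inotropes"]))
print(f"validation AUC: {auc:.2f}")
```

The printed rule list is what makes such trees easy to interpret at the bedside: each leaf is a human-readable decision rule with an attached mortality probability.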

Relevance:

80.00%

Publisher:

Abstract:

For this project, a Matlab program was developed that allows us to run experiments with some of the fundamental tools of technical analysis. Specifically, we focused on Wilder's Directional Movement Indicator. The program consists of six functions that download data, simulate the indicator, automatically tune some of its parameters, and present the simulation results. The experiments and simulations carried out show the importance of choosing appropriately the periods of the ±DIs (positive and negative directional indicators) and of the ADX (Average Directional Movement Index). We also found that the decision rules suggested by renowned authors such as Cava and Ortiz do not always behave as one would expect. To improve the performance and reliability of this indicator, we propose including some moving average of prices and of the trading volume in the decision criteria. It could also be improved by implementing a system that self-adjusts the decision criteria.
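The original program is in Matlab; for consistency with the rest of these sketches, here is a hedged Python outline of the Directional Movement Indicator computation it simulates (Wilder's smoothing is implemented as an EMA with alpha = 1/n; the toy price series is invented):

```python
import numpy as np
import pandas as pd

def dmi(high, low, close, n=14):
    """Wilder's Directional Movement Indicator: +DI, -DI and ADX."""
    up = high.diff()
    down = -low.diff()
    plus_dm = np.where((up > down) & (up > 0), up, 0.0)
    minus_dm = np.where((down > up) & (down > 0), down, 0.0)
    # True range and Wilder-smoothed averages (EMA with alpha = 1/n)
    tr = pd.concat([high - low,
                    (high - close.shift()).abs(),
                    (low - close.shift()).abs()], axis=1).max(axis=1)
    atr = tr.ewm(alpha=1 / n, adjust=False).mean()
    plus_di = 100 * pd.Series(plus_dm, index=high.index).ewm(alpha=1 / n, adjust=False).mean() / atr
    minus_di = 100 * pd.Series(minus_dm, index=high.index).ewm(alpha=1 / n, adjust=False).mean() / atr
    dx = 100 * (plus_di - minus_di).abs() / (plus_di + minus_di)
    adx = dx.ewm(alpha=1 / n, adjust=False).mean()
    return plus_di, minus_di, adx

# Toy upward-trending series with some oscillation
idx = pd.RangeIndex(60)
close = pd.Series(np.linspace(100, 130, 60) + np.sin(np.arange(60)), index=idx)
high, low = close + 1, close - 1
p, m, adx = dmi(high, low, close)
print(p.iloc[-1] > m.iloc[-1])  # in an uptrend, +DI ends above -DI
```

A classical decision rule of the kind tested in the project is to go long when +DI crosses above -DI while ADX exceeds some threshold; the period `n` is exactly the parameter whose choice the abstract flags as critical.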

Relevance:

80.00%

Publisher:

Abstract:

This thesis consists of four articles and an introductory section. The main research questions in all the articles refer to the changes in the representativeness of the Finnish Paper Workers' Union. Representativeness stands for the entire entity of external, internal, legal and reputational factors that enable the labor union to represent its members and achieve its goals. This concept is based on an extensive reading of the quantitative and qualitative industrial relations literature, which includes works based on Marxist labor-capital relations (such as Hyman's industrial relations studies), more recent union density studies, and gender- and ethnic-diversity-based 'union revitalization' studies. Müller-Jentsch's German studies of industrial relations have been of particular importance, as have Streeck's studies of industrial unionism and technology. The concept of representativeness is an attempt to combine the insights of these diverse strands of literature and bring the scientific discussion of labor unions back to the core of a union's function: representing its members. As such, it can be seen as a theoretical innovation. The concept helps to acknowledge both the heterogeneity of the membership and the totality of a labor union organization. The concept of representativeness aims to move beyond notions of 'power'. External representativeness can be expressed through the position of the labor union in the industrial relations system and the economy. Internal representativeness focuses on the aspects of labor unions that relate to the function of the union as an association with members, such as internal democracy. Legal representativeness lies in the formal legal position of the union – its rights and instruments. This includes collective bargaining legislation, co-decision rules and industrial conflict legislation.
Reputational representativeness relates to how the union is seen by other actors and the general public, and can be approximated using data on strike activity. All these aspects of representativeness are path-dependent and reflect the results of previous struggles. The concept of representativeness goes beyond notions of labor union power and symbolizes an attempt to bring the focus of industrial relations studies back to the union's basic function of representing its members. The first article examines in detail the industrial conflict in the Finnish paper industry in 2005. The intended focus was the issue of gender in the negotiations over a new collective agreement, but the focal point of the industrial conflict was the issue of outsourcing and how this should be organized. The issue of continuous shifts, as a working-time issue, was also very important. The drawn-out conflict can be seen as a struggle over principles, and under pressure the labor union had to concede ground on the aforementioned issues. The article concludes that in this specific conflict the union represented its female members to a lesser extent, because the other issues took such priority. Furthermore, because of the substantive concessions, the union lost some of its internal representativeness, and the stubbornness of the union may even have harmed its reputation. This article also includes an early version of the representativeness framework, through which this conflict is analyzed. The second article discusses wage developments, union density and collective bargaining within the context of representativeness. It is shown that the union has been able to secure substantial benefits for its members, regardless of declining employment. Collective agreements have often been based on centralized incomes policies, but the paper sector has not always joined these.
Attention is furthermore paid to the changing composition of the General Assembly, in which the Left Alliance still holds a surprisingly strong position. In an attempt to replicate the analysis of union density measures, an analysis of sectoral union density shows that factors similar to those in aggregate data influence this measure, though – due to methodological issues – the results may not be robust. On this issue, it can be said that the method of analysis for aggregate union density is not suitable for sectoral union density analysis. The increasingly conflict-ridden industrial relations that had been predicted have not actually materialized. The article concludes by asking whether the aim of ever-increasing wages is sustainable in the light of the pressures of globalization, though wage costs are a relatively small part of total costs. The third article discusses the history and use of outsourcing in the Finnish paper industry. It is shown, using Hyman's framework of constituencies, that over time the perspective of the union changed from 'members of the Paper Workers' Union' to a more specific view of who is a core member of the union. Within the context of the industrial unionism that the union claims to practice, this is an important change. The article shows that the union increasingly caters to a core group, while auxiliary personnel are less important to the union's identity and constituencies, which means that the union's internal representativeness has decreased. Maintenance workers are an exception; the union and employers have developed a rotating system that increases the efficient allocation of these employees. The core reason for the exceptional status of maintenance personnel is their high level of non-transferable skills. In the end it is debatable whether the compromise on outsourcing solves the challenges facing the industry.
The fourth article shows diverging discourses within the union with regard to union-employer partnership for competitiveness improvements and the instruments of local union representatives. In the collective agreement of 2008, the provision regulating the wage effects of significant changes in the organization or content of work was thoroughly changed, though this mainly reflected decisions by the Labor Court on the pre-2008 version of the provision. This change laid bare the deep rift between the Social Democratic and Left Alliance (ex-Communist) factions of the union. The article argues that through the changed legal meaning of the provision, the union was able to transform concession bargaining into a basis for partnership. The internal discontent about this issue is nonetheless substantial and a threat to the unity of the union, both locally and at the union level. On the basis of the results of the articles, of other factors influencing representativeness (such as technology and EU law), and of an overview of the main changes in the Finnish paper industry, it is concluded that, especially in recent years, the Finnish Paper Workers' Union has lost some of its representativeness. This decline has been caused in particular by the loss of the effectiveness of strikes, the compromise on outsourcing (which may have alienated a substantial part of the union's membership), and the change in the collective agreement of 2008. In the latter case, the internal disunity on that issue shows the constraints of the union's internal democracy. Furthermore, the failure of the union to join the TEAM industrial union (by democratic means), the internal conflicts and a narrow focus on its own sector may also hurt the union in the future, as the paper industry in Finland is going through a structural change.
None of these changes in representativeness would have been so drastic without the considerable pressure of globalization - in particular changing markets, changing technology and a loss of domestic investments to foreign investments, which in the end have benefited the corporations more than the Finnish employees of these corporations. Taken together, the union risks becoming socially irrelevant in time, though it will remain formally very strong on the basis of its institutional setting and financial situation.

Relevance:

80.00%

Publisher:

Abstract:

The subject of this doctoral dissertation is the drafting of and decision-making on legislation in the European Union, particularly from the perspective of how a small member state like Finland can influence EU legislation. The dissertation analyses the dynamics between the Union's institutions and Finland's position, especially in the ordinary legislative procedure under Article 289(1) and Article 294 TFEU. Since the entry into force of the Lisbon Treaty, the ordinary legislative procedure, formerly known as the co-decision procedure, has clearly been the most common legislative procedure in the Union. The dissertation consists of six separately published, mostly peer-reviewed articles and a summary section that complements and synthesises them. This edition of the book contains only the summary chapter, not the separately published articles. The dissertation draws on the literature of European law and political science. Methodologically, it represents empirical legal research, combining doctrinal legal analysis with the analysis of empirical, in this case mainly qualitative, data. The summary tracks legislative changes and case law up to 10 April 2015. The overarching theme of the dissertation is the relationship between law and politics in EU law-making. Two general arguments tie the articles and the summary together. First, the legal rules and institutionalised practices governing the EU's legislative procedure create a framework for the institutions' internal decision-making and for the political negotiations between them, even though there is usually no need to invoke these rules and practices explicitly during the procedure. Second, because the formal power of a small member state like Finland – that is, its number of votes in the Council – is very limited, Finnish ministers and officials should exploit various informal channels of influence if Finland's actual influence in the procedure is to be strengthened.
The Union's legislative activity typically does not proceed according to a rational model of decision-making; rather, it is an erratic and hard-to-predict struggle between actors representing different preferences. The first article of the dissertation analyses legislative drafting and the legislative procedure in the Union stage by stage. It concludes that the co-decision procedure, later the ordinary legislative procedure, has given rise to a new legislative culture in the Union, characterised by close contacts between the Commission, the European Parliament and the Council. The institutions nowadays flexibly take each other's positions into account as the procedure advances, which makes it possible for the majority of EU acts to be adopted already at first reading. The second article analyses the Commission's position in the Union's institutional structure. It examines the Commission's right of initiative and the procedures for selecting the Commission's President and its members, from the perspective of whether the Commission really promotes the general interest of the Union independently, as required by Article 17 TEU. Through certain arrangements, the relationship between the European Parliament and the Commission has evolved in a direction in which the Commission acts to some extent as a government accountable to Parliament. The article criticises this development: it does not necessarily bring citizens closer to the Union's institutions, and it is liable to jeopardise the Commission's position as a relatively independent broker in the trilogues. The third article contains a case study of the drafting stages of the directive regulating consumer credit (2008/48/EC). The case study illustrates the means, strengths and areas for improvement of the EU lobbying carried out by representatives of the Finnish government.
The article notes that a key resource for Finland's influence is officials who master both the substantive questions of the legislative proposal at hand and the Union's decision-making procedures and the institutions' institutionalised practices. The empirical observations made in the article about negotiations between member states support the basic assumptions of the constructivist model. The fourth article, written jointly with Professor Tapio Raunio, analyses the national preparation of EU affairs, more specifically how Finland's negotiating positions are formed at the highest level of the government's coordination system, in the Cabinet Committee on European Union Affairs. On the basis of extensive minutes and complementary interview data, the article finds that the drafting of the committee's agenda has been delegated entirely to expert officials. The agenda is naturally also shaped by the agenda of the Union's institutions, especially the European Council. On the other hand, in the committee's meetings the ministers alone take the decisions and set the course of Finland's EU policy. The fifth article examines how one should act to ensure that a new or amended EU act corresponds as closely as possible to Finland's nationally defined negotiating position. It is most effective to influence the Commission, which holds the right of initiative in the legislative procedure, where necessary also at the highest levels of the administrative hierarchy, and to cooperate with other member states, in particular the Presidency, the incoming Presidencies and the large member states. If a pending EU legislative proposal is deemed nationally especially important or problematic, the lobbying should be extended to cover key persons in the European Parliament as well. The sixth article analyses the opportunities of Finnish civil society and interest groups to influence the preparation of EU affairs.
It concludes that formal coordination in the broad-composition meetings of the EU preparation sub-committees is neither the primary nor the most effective channel of influence for stakeholders. Instead, informal contacts with the responsible official in the competent ministry at home and influence exerted through European umbrella organisations come to the fore. The summary section of the dissertation analyses the stages of EU legislative drafting and the legislative procedure at which a small member state like Finland has the best chances of influencing the act under preparation. The best opportunities lie at the very beginning of an EU act's life cycle, when the Commission is only just launching the preparation of new legislation. The dissertation finds that Finland has been able to develop early position-forming, and the up-front lobbying it enables, especially in those politically, economically or legally important projects in which the government's position is formed in the Cabinet Committee on European Union Affairs. In other Union legislative projects, the intensity of up-front lobbying appears to vary, depending among other things on the commitment of the middle and top management of the competent ministry. A second opportune moment for Finland to exert influence comes when the Commission's proposal is being examined among expert officials in a Council working party. Effective influence requires that the persons representing Finland in the negotiations assemble, from 'like-minded' member states, a winning coalition under the double-majority rule. A last window of influence opens when the Coreper committee draws up the negotiating mandate for the Council Presidency for the interinstitutional trilogues at the first reading of the ordinary legislative procedure. At this rather late stage of the procedure, exerting influence is already clearly more difficult from a small member state's perspective.
The dissertation fits naturally into the political-science literature on Europeanisation insofar as it studies the effects of EU membership on domestic administrative structures and the political agenda. As is well known, Finland's EU policy is built on a government accountable to Parliament. The dissertation does not, however, specifically examine the constitutionally mandated cooperation between Parliament and the government in EU affairs. Instead, it studies the preparation and coordination of Union affairs within the government. When the coordination system for EU affairs was created, it was considered important that a single, unified national negotiating position can be formed for every legislative and policy project. Pursuing a unified national line was seen to improve Finland's position in the Union's decision-making. The dissertation concludes that the national preparation system for EU affairs fulfils the objectives set for it quite well in practice. The most significant area for improvement concerns the reactive nature of national EU preparation. If Finland wants to influence EU law-making ever more strongly, projects important to Finland should be identified at an early stage and their handling in the ministries clearly prioritised.

Relevance:

80.00%

Publisher:

Abstract:

Rough Set Data Analysis (RSDA) is a non-invasive data analysis approach that relies solely on the data to find patterns and decision rules. Despite its non-invasive approach and its ability to generate human-readable rules, classical RSDA has not been successfully used in commercial data mining and rule generating engines. The reason is its poor scalability: classical RSDA slows down considerably on larger data sets and takes much longer to generate the rules. This research aims to address the issue of scalability in rough sets by improving the performance of the attribute reduction step of classical RSDA, which is the root cause of its slow performance. We propose to move the entire attribute reduction process into the database. We defined a new schema to store the initial data set. We then defined SQL queries on this new schema to find the attribute reducts correctly and faster than the traditional RSDA approach. We tested our technique on two typical data sets and compared our results with the traditional RSDA approach for attribute reduction. In the end we also highlighted some of the issues with our proposed approach which could lead to future research.
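The paper's actual schema and queries are not reproduced in the abstract; as a minimal sketch of the database-side idea, an attribute subset preserves the decisions (is a "super-reduct") exactly when no group of rows that agree on those attributes contains more than one decision value, which a single `GROUP BY ... HAVING` query can check (single-table schema and toy data assumed):

```python
import sqlite3
from itertools import combinations

# Toy decision table: condition attributes a, b, c and decision d
rows = [
    (1, 0, 0, 'no'), (1, 0, 1, 'no'), (0, 1, 0, 'yes'),
    (0, 1, 1, 'yes'), (1, 1, 0, 'yes'), (1, 1, 1, 'yes'),
]
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (a INT, b INT, c INT, d TEXT)')
con.executemany('INSERT INTO t VALUES (?, ?, ?, ?)', rows)

def is_consistent(attrs):
    """attrs preserves the decision iff no group of objects agreeing on
    attrs contains more than one distinct decision value."""
    cols = ', '.join(attrs)
    q = (f'SELECT COUNT(*) FROM (SELECT {cols} FROM t '
         f'GROUP BY {cols} HAVING COUNT(DISTINCT d) > 1)')
    return con.execute(q).fetchone()[0] == 0

# Reducts are the minimal consistent attribute subsets
consistent = [set(s) for k in range(1, 4)
              for s in combinations('abc', k) if is_consistent(s)]
reducts = [s for s in consistent if not any(t < s for t in consistent)]
print(reducts)  # → [{'b'}]
```

Pushing this check into the database engine, as the research proposes, lets the grouping and counting run on indexed, disk-resident data instead of in-memory set operations.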

Relevância:

80.00% 80.00%

Publicador:

Resumo:

In the last decade, the potential macroeconomic effects of intermittent large adjustments in microeconomic decision variables such as prices, investment, consumption of durables or employment – a behavior which may be justified by the presence of kinked adjustment costs – have been studied in models where economic agents continuously observe the optimal level of their decision variable. In this paper, we develop a simple model which introduces infrequent information into a kinked adjustment cost model by assuming that agents do not continuously observe the frictionless optimal level of the control variable. Periodic releases of macroeconomic statistics or dividend announcements are examples of such infrequent information arrivals. We first solve for the optimal individual decision rule, which is found to be both state- and time-dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. Our model has the distinct characteristic that a vast number of agents tend to act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. We show that these results differ substantially from the ones obtained with full-information adjustment cost models.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

This paper proposes a definition of relative uncertainty aversion for decision models under complete uncertainty. It is shown that, for a large class of decision rules characterized by a set of plausible axioms, the new criterion yields a complete ranking of those rules with respect to the relative degree of uncertainty aversion they represent. In addition, we address a combinatorial question that arises in this context, and we examine conditions for the additive representability of our rules.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

Wetlands perform several important ecological functions and contribute to the biodiversity of fauna and flora. Although there is growing recognition of the importance of protecting these environments, their integrity is still threatened by the pressure of human activities. Systematic inventory and monitoring of wetlands is a necessity, and remote sensing is the only realistic means of achieving this goal. The objective of this thesis is to contribute to and improve the characterization of wetlands using satellite data acquired by polarimetric radars in L-band (ALOS-PALSAR) and C-band (RADARSAT-2). The thesis rests on two hypotheses (chap. 1). The first hypothesis states that vegetation physiognomy classes, based on plant structure, are more appropriate than plant species classes because they are better suited to the information content of polarimetric radar images. The second hypothesis states that polarimetric decomposition algorithms allow an optimal extraction of the polarimetric information compared with a multi-polarization approach based on the HH, HV and VV polarization channels (chap. 3). In particular, the contribution of Touzi's incoherent decomposition to wetland inventory and monitoring is examined in detail. This decomposition characterizes the scattering type, phase, orientation, symmetry, degree of polarization and backscattered power of a target through a series of parameters extracted from an eigenvector and eigenvalue analysis of the coherency matrix. The Lac Saint-Pierre region was selected as the study site given the great diversity of its wetlands, which cover more than 20,000 ha there.
One of the challenges posed by this thesis is that no standard system exists enumerating the full set of possible physiognomic classes, nor precise guidance as to their characteristics and dimensions. Great attention was therefore devoted to creating these classes by cross-referencing diverse data sources, and more than 50 plant species were grouped into 9 physiognomic classes (chap. 7, 8 and 9). Several analyses are proposed to validate the hypotheses of this thesis (chap. 9). Scatter-plot sensitivity analyses are used to study the characteristics and dispersion of the vegetation physiognomies in different spaces composed of polarimetric parameters or polarization channels (chap. 10 and 12). Time series of RADARSAT-2 images are used to deepen the understanding of the seasonal evolution of the vegetation physiognomies (chap. 12). The transformed divergence algorithm is used to quantify the separability between physiognomic classes and to identify the parameter(s) that contributed most to their separability (chap. 11 and 13). Classifications are also proposed, and the results are compared with an existing map of the Lac Saint-Pierre wetlands (chap. 14). Finally, an analysis of the potential of C- and L-band polarimetric parameters for monitoring peatland hydrology is proposed (chap. 15 and 16). The sensitivity analyses show that the parameters of the first component, relating to the dominant (polarized) portion of the signal, are sufficient for a general characterization of the vegetation physiognomies. The parameters of the second and third components are, however, necessary to obtain better separability between classes (chap. 11 and 13) and better discrimination between wetlands and dry lands (chap. 14).
This thesis shows that it is preferable to consider the parameters of the first, second and third components individually rather than their sum weighted by their respective eigenvalues (chap. 10 and 12). The thesis also examines the complementarity between the structural parameters and those relating to backscattered power, which is often ignored and normalized out by most polarimetric decompositions. The temporal (seasonal) dimension is essential for the characterization and classification of the vegetation physiognomies (chap. 12, 13 and 14). Images acquired in spring (April and May) are needed to discriminate dry lands from wetlands, while images acquired in summer (July and August) are needed to refine the classification of the vegetation physiognomies. A hierarchical classification tree developed in this thesis constitutes a synthesis of the knowledge acquired (chap. 14). Using a relatively small number of polarimetric parameters and simple decision rules, it is possible to identify, among others, three low-marsh classes and to successfully discriminate herbaceous high marshes from the other physiognomic classes without resorting to auxiliary data sources. The results obtained are comparable to those of a supervised classification using two Landsat-5 images, with overall accuracies of 77.3% and 79.0% respectively. Various classifications using support vector machines (SVM) reproduce the results obtained with the hierarchical classification tree. Exploiting a higher dimensionality with the SVM, with a maximum overall accuracy of 79.1%, does not, however, yield significantly better results. Finally, the phase of the Touzi decomposition appears to be the only parameter (in L-band) sensitive to variations in the water level beneath the surface of open peatlands (chap. 16).
This parameter therefore offers great potential for monitoring peatland hydrology, compared with the phase difference between the HH and VV channels. The thesis demonstrates that the parameters of the Touzi decomposition allow better characterization, better separability and better classification of wetland vegetation physiognomies than the HH, HV and VV polarization channels. Grouping plant species into physiognomic classes is a valid concept; however, some plant species sharing a similar physiognomy but occupying a different environment (high vs. low marsh) exhibited significant differences in their backscattering properties.
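The "small number of polarimetric parameters and simple decision rules" idea can be illustrated with a toy hierarchical rule classifier. Everything below is invented for demonstration: the parameter names loosely echo Touzi decomposition outputs (dominant scattering type, backscattered power) and a seasonal flag, but the thresholds and class labels are not taken from the thesis.

```python
# Illustrative sketch only: a tiny hierarchical rule classifier in the
# spirit of the thesis's classification tree. Thresholds are hypothetical.
def classify(sample):
    # sample: dict with dominant scattering angle 'alpha_s1' (degrees),
    # backscattered power 'power_db' (dB), and an acquisition 'season'.
    if sample["season"] == "spring" and sample["power_db"] < -18.0:
        return "open water / low marsh"   # spring flooding, low return
    if sample["alpha_s1"] < 30.0:
        return "herbaceous high marsh"    # surface-dominated scattering
    if sample["alpha_s1"] > 55.0:
        return "swamp (double bounce)"    # trunk-ground interactions
    return "shrub wetland"                # intermediate / volume scattering

print(classify({"season": "spring", "power_db": -20.0, "alpha_s1": 20.0}))
print(classify({"season": "summer", "power_db": -10.0, "alpha_s1": 60.0}))
```

The appeal of such a tree, as the thesis notes, is interpretability: each leaf is reached by a short, human-readable chain of tests on physically meaningful parameters, unlike the higher-dimensional SVM classifiers it is compared against.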

Relevância:

80.00% 80.00%

Publicador:

Resumo:

Introduction: Syncope is a frequent reason for emergency department visits, and how to work up these patients and decide their disposition remains controversial. Several scales have been designed for risk stratification in patients with this condition. This study compares the operating characteristics of 4 scales for the decision to hospitalize patients with syncope presenting to the emergency department of a level III and IV institution. Methods: Cross-sectional analytical study in which the 4 risk scales were applied to patients who presented with syncope to the emergency department over a 6-month period and were hospitalized at the institution where the study was conducted. Results were evaluated with Epidat 3.1 for sensitivity, specificity and Youden's index. Results: A total of 91 patients were included. The sensitivity of the San Francisco, OESIL, EGSYS and institutional scales for the need for hospitalization was 79%, 87%, 63% and 95% respectively, and the specificity was 52%, 40%, 64% and 14%. Mortality risk was not adequately detected by the San Francisco scale. Conclusions: None of the scales applied to hospitalized patients presenting with syncope to the emergency department outperformed clinical judgment in deciding hospitalization. However, the OESIL and institutional scales may help corroborate the clinical decision to hospitalize in this population.
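The operating characteristics reported above follow from the standard 2×2 definitions. A minimal worked sketch (the counts below are illustrative, not the study's data):

```python
# Sensitivity, specificity and Youden's J from a 2x2 table:
# tp/fn = hospitalization truly required, flagged / missed by the scale;
# tn/fp = not required, correctly / incorrectly flagged.
def diagnostics(tp, fn, tn, fp):
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    return sens, spec, sens + spec - 1  # Youden's J = sens + spec - 1

sens, spec, j = diagnostics(tp=40, fn=10, tn=30, fp=11)
print(round(sens, 2), round(spec, 3), round(j, 3))  # 0.8 0.732 0.532
```

Youden's J weighs sensitivity and specificity equally; a scale like the institutional one (95% sensitive, 14% specific) scores poorly on J despite missing few high-risk patients, which is consistent with the authors' conclusion that no scale beat clinical judgment.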

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The aim of the study was to establish and verify a predictive vegetation model for plant community distribution in the alti-Mediterranean zone of the Lefka Ori massif, western Crete. Based on previous work, three variables were identified as significant determinants of plant community distribution, namely altitude, slope angle and geomorphic landform. The response of four community types to these variables was tested using classification tree analysis in order to model community type occurrence. V-fold cross-validation plots were used to determine the length of the best-fitting tree. The final 9-node tree selected correctly classified 92.5% of the samples. The results were used to provide decision rules for the construction of a spatial model for each community type. The model was implemented within a Geographical Information System (GIS) to predict the distribution of each community type in the study site. The evaluation of the model in the field using an error matrix gave an overall accuracy of 71%. The user's accuracy was higher for the Crepis-Cirsium (100%) and Telephium-Herniaria community types (66.7%) and relatively lower for the Peucedanum-Alyssum and Dianthus-Lomelosia community types (63.2% and 62.5%, respectively). Misclassification and field validation point to the need for improved geomorphological mapping and suggest the presence of transitional communities between existing community types.
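The evaluation step reported above (overall accuracy and per-class user's accuracy from an error matrix) can be sketched concisely. The matrix below is a made-up two-class example, not the study's field data; user's accuracy is the proportion of a class's predictions that match the field reference (rows = predicted, columns = reference).

```python
# Overall and user's accuracy from an error (confusion) matrix.
# Rows are the model's predictions, columns the field reference.
matrix = {
    "Crepis-Cirsium":      {"Crepis-Cirsium": 8, "Telephium-Herniaria": 0},
    "Telephium-Herniaria": {"Crepis-Cirsium": 2, "Telephium-Herniaria": 6},
}

total = sum(sum(row.values()) for row in matrix.values())
correct = sum(matrix[c][c] for c in matrix)           # diagonal
overall = correct / total                              # overall accuracy
users = {c: matrix[c][c] / sum(matrix[c].values())     # per-class user's acc.
         for c in matrix}

print(overall)                        # (8 + 6) / 16 = 0.875
print(users["Telephium-Herniaria"])   # 6 / 8 = 0.75
```

Producer's accuracy (column-wise) would be computed the same way over reference totals; the gap between the 92.5% cross-validated tree accuracy and the 71% field accuracy in the study is exactly what such a field error matrix exposes.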

Relevância:

80.00% 80.00%

Publicador:

Resumo:

In this paper the properties of a hydro-meteorological forecasting system for forecasting river flows have been analysed using a probabilistic forecast convergence score (FCS). The focus on fixed-event forecasts provides a forecaster's view of system behaviour and adds an important perspective to the suite of forecast verification tools commonly used in this field. A low FCS indicates a more consistent forecast. It can be demonstrated that the annual maximum FCS has decreased over the last 10 years. With lead time, the FCS of the ensemble forecast decreases, whereas those of the control and high-resolution forecasts increase. The FCS is influenced by lead time, threshold, and catchment size and location, which indicates that seasonality-based decision rules should be used to issue flood warnings.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The induction of classification rules from previously unseen examples is one of the most important data mining tasks in science as well as in commercial applications. In order to reduce the influence of noise in the data, ensemble learners are often applied. However, most ensemble learners are based on decision tree classifiers, which are affected by noise. The Random Prism classifier has recently been proposed as an alternative to the popular Random Forests classifier, which is based on decision trees. Random Prism is based on the Prism family of algorithms, which is more robust to noise. However, like most ensemble classification approaches, Random Prism also does not scale well to large training data. This paper presents a thorough discussion of Random Prism and a recently proposed parallel version of it called Parallel Random Prism, which is based on the MapReduce programming paradigm. The paper provides, for the first time, a novel theoretical analysis of the proposed technique and an in-depth experimental study showing that Parallel Random Prism scales well to a large number of training examples, a large number of data features and a large number of processors. The expressiveness of the decision rules our technique produces makes it a natural choice for Big Data applications where informed decision making increases the user's trust in the system.
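To make the contrast with tree learners concrete, one Prism-style rule induction step can be sketched as follows. This is my own simplification of the separate-and-conquer idea behind the Prism family (not the paper's Random Prism or its MapReduce parallelization): for a target class, greedily add the attribute-value test whose covered subset has the highest target-class probability, until the rule covers only target-class instances.

```python
# Simplified Prism-style induction of a single rule for `target`.
# A rule is a list of (attribute, value) tests, all of which must hold.
def induce_rule(data, target):
    rule, covered = [], list(data)
    while any(d["class"] != target for d in covered):
        best, best_p = None, -1.0
        for d in covered:
            for attr, val in d.items():
                if attr == "class" or (attr, val) in rule:
                    continue
                subset = [c for c in covered if c[attr] == val]
                p = sum(c["class"] == target for c in subset) / len(subset)
                if p > best_p:            # pick the most class-pure test
                    best, best_p = (attr, val), p
        rule.append(best)
        covered = [c for c in covered if c[best[0]] == best[1]]
    return rule

data = [
    {"outlook": "sunny", "windy": "no",  "class": "play"},
    {"outlook": "sunny", "windy": "yes", "class": "stay"},
    {"outlook": "rain",  "windy": "no",  "class": "stay"},
]
print(induce_rule(data, "play"))  # e.g. [('outlook', 'sunny'), ('windy', 'no')]
```

Because each rule is a flat conjunction of tests rather than a path through a shared tree, rules for different classes are induced independently, which is the property that makes the family comparatively robust to noise and, in Parallel Random Prism, easy to distribute across mappers.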