933 results for Situation analysis
Abstract:
Research and professional practice share the aim of re-structuring preconceived notions of reality: both seek to gain an understanding of social reality. Social workers use their professional competence to grasp the reality of their clients, while researchers seek to unlock the secrets of the research material. Development and research are now so intertwined and inherent in almost all professional practice that distinguishing between practising, developing and researching has become difficult and in many respects irrelevant. Moving towards research-based practices is possible and is easily applied within the framework of the qualitative research approach (Dominelli 2005, 235; Humphries 2005, 280). Social work can be understood as acts and speech acts crisscrossing between social workers and clients. When trying to catch the verbal and non-verbal hints in each other's behaviour, the actors have to make many interpretations in a more or less uncertain mental landscape. Our point of departure is the idea that the study of social work practices requires tools which effectively reveal the internal complexity of social work (see, for example, Adams, Dominelli & Payne 2005, 294-295). The boom in qualitative research methodologies in recent decades is associated with a much more profound rupture in the humanities, known as the linguistic turn (Rorty 1967). The idea that language does not transparently mediate our perceptions and thoughts about reality but, on the contrary, constitutes it was new and even confusing to many social scientists. Nowadays we have become used to reading research reports that apply various branches of discourse analysis or narratological or semiotic approaches. Although the differences between those orientations are subtle, they share the idea of the primacy of language.
Despite the lively research activity in today's social work and the research-minded atmosphere of social work practice, semiotics has rarely been applied in social work research. Yet social work as a communicative practice concerns symbols, metaphors and all kinds of representative structures of language. Those items are at the core of semiotics, the science of signs, which examines how people use signs in their mutual interaction and in their endeavours to make sense of the world they live in, their semiosis. When thinking of the practice of social work and researching it, a number of interpretational levels must be passed before the research phase is reached. First of all, social workers have to interpret their clients' situations, which will be recorded in the case files. In some very rare cases those past situations will be reflected on in discussions or interviews, or put under the scrutiny of some researcher in the future. Each new observation adds its own flavour to the mixture of meanings. Social workers combine their observations with previous experience and professional knowledge; furthermore, the situation at hand also influences their reactions. In addition, the interpretations made by social workers in the course of their daily working routines are never merely part of the worker's personal process, but are always inherently cultural. Work aiming at social change is defined by the presence of an initial situation, a specific goal, and the means and ways of achieving it, which are – or which should be – agreed upon by the social worker and the client in a situation which is unique and at the same time socially driven. Because of the inherently plot-based nature of social work, the practices related to it can be analysed as stories (see Dominelli 2005, 234), given, of course, that they signify something and are told by someone.
Research on these practices concentrates on impressions, perceptions, judgements, accounts, documents and so on. All these multifarious elements can be scrutinised as textual corpora, but not as just any textual material: in semiotic analysis, the material studied is characterised as verbal or textual and loaded with meanings. We present a methodological contribution, semiotic analysis, which to our mind has at least implicit relevance to social work practices. Our examples of semiotic interpretation are drawn from our dissertations (Laine 2005; Saurama 2002). The data are official documents from the archives of a child welfare agency and transcriptions of interviews with shelter employees. These data can be defined as stories told by the social workers about what they have seen and felt. The official documents present only fragments and are often written in the passive voice (Saurama 2002, 70). The interviews carried out in the shelters can be described as stories in which the narrators are more familiar and known, and the material is characterised by the interaction between interviewer and interviewee. The levels of the story and of the telling of the story become apparent when interviews or documents are examined with semiotic tools. The roots of semiotic interpretation can be found in three different branches: American pragmatism, Saussurean linguistics in Paris, and so-called formalism in Moscow and Tartu. In this paper, however, we engage with the so-called Parisian School of semiology, whose prominent figure was A. J. Greimas. The Finnish sociologists Pekka Sulkunen and Jukka Törrönen (1997a; 1997b) have further developed Greimas's ideas in their studies on socio-semiotics, and we lean on their work.
In semiotics, social reality is conceived as a relationship between subjects, observations and interpretations, mediated by natural language, the most common sign system among human beings (Mounin 1985; de Saussure 2006; Sebeok 1986). Signification is the act of associating an abstract concept (the signified) with some physical instrument (the signifier). These two elements together form the basic unit, the "sign", which never constitutes any kind of meaning alone. Meaning arises in a process of distinction in which signs are related to other signs, and in this chain of signs meaning diverges from reality. (Greimas 1980, 28; Potter 1996, 70; de Saussure 2006, 46-48.) One interpretative tool is to think of speech as a surface beneath which deep structures – i.e. values and norms – exist (Greimas & Courtes 1982; Greimas 1987). To our mind, semiotics is very much about playing with two different levels of text: the syntagmatic surface, which is more or less faithful to the grammar, and the paradigmatic, semantic structure of values and norms hidden in the deeper meanings of interpretations. Semiotic analysis deals precisely with the level of meaning that exists beneath the surface, but the only way to reach those meanings is through the textual level, the written or spoken text; that is why the tools are needed. In our studies, we have used the semiotic square and actant analysis: the former is based on the distinction and categorisation of meanings, the latter on opening up the plot of narratives in order to reach the value structures.
Abstract:
In the current context, which favours the emergence of new diseases, syndromic surveillance (SyS) appears an increasingly relevant tool for the early detection of unexpected health events. The Triple-S project (Syndromic Surveillance Systems in Europe), co-financed by the European Commission, was launched in September 2010 for a three-year period to promote both human and animal health SyS in European countries. The objectives of the project included performing an inventory of current and planned European animal health SyS systems and promoting knowledge transfer between SyS experts. This study presents and discusses the results of the Triple-S inventory of European veterinary SyS initiatives. European SyS systems were identified through an active process based on a questionnaire sent to animal health experts involved in SyS in Europe. Results were analysed through a descriptive analysis and a multiple factor analysis (MFA) in order to establish a typology of the European SyS initiatives. Twenty-seven European SyS systems were identified from twelve countries, at different levels of development, from project phase to active systems. The results of this inventory showed a real interest among European countries in SyS, but also highlighted the novelty of this field. The survey revealed the diversity of SyS systems in Europe in terms of objectives, populations targeted, data providers and indicators monitored. For most SyS initiatives, statistical analysis of surveillance results was identified as a limitation in using the data. MFA results distinguished two types of system: the first belonged to the private sector, focused on companion animals and had reached a higher degree of achievement; the second was based on mandatorily collected data, targeted livestock species and was still in an early project phase. The exchange of knowledge between the human and animal health sectors was considered useful for enhancing SyS.
In the same way that SyS is complementary to traditional surveillance, synergies between human and animal health SyS could add value, most notably by enhancing timeliness and sensitivity and helping to interpret non-specific signals.
Abstract:
Critical situations (CSs) involving football fans are a well-researched phenomenon, with most studies examining factors leading to an escalation of violence (e.g. Braun & Vliegenthart, 2008). However, research so far has fallen short of analysing CSs that do not escalate (e.g. Hylander & Guvå, 2010) and of establishing observable criteria that constitute such CSs. Granström et al. (2009), for instance, put forward a definition of a CS describing such situations as characterised by a discrepancy between peace- and war-making behaviours between police and demonstrators. Still, this definition remains vague and does not provide concrete, defining criteria that can be identified on site. The present study looks beyond fans' violent acts per se and focuses on situations with a potentially – but not necessarily – violent outcome. The aim of this preliminary study is to identify observable criteria defining such a CS involving football fans. A focus group comprising five experts working with football fans in the German-speaking area of Switzerland discussed observable characteristics of a CS. Inductive content analysis led to the identification of specific criteria such as "arrest of a fan", "insufficient distance (<30 m) between fans and police" and "fans mask themselves". These criteria were then assigned to four phases of a CS, highlighting the dynamic aspect of this phenomenon: Antecedents, Causes, Reactions and Consequences. Specifically, Causes, Reactions and Consequences are observable on site, while Antecedents comprise relevant background information directly influencing a CS. This study puts forward a working definition of a CS that can facilitate the assessment of actual situations in the football context as well as further research on fan violence prevention and control.
These results also highlight similarities with studies investigating fan violence in other European countries while acknowledging unique characteristics of the Swiss German fan culture.
Abstract:
BACKGROUND Caring for children diagnosed with cancer affects parents' professional lives. The long-term impact, however, is not clear. We aimed to compare the employment situation of parents of long-term childhood cancer survivors with control parents from the general population, and to identify clinical and socio-demographic factors associated with parental employment. METHODS As part of the Swiss Childhood Cancer Survivor Study, we sent a questionnaire to parents of survivors aged 5-15 years who had survived ≥5 years after diagnosis. Information on control parents of the general population came from the Swiss Health Survey (restricted to men and women with ≥1 child aged 5-15 years). Employment was categorised as not employed, part-time employed, or full-time employed. We used generalized ordered logistic regression to determine associations with clinical and socio-demographic factors. Clinical data were available from the Swiss Childhood Cancer Registry. RESULTS We included 394 parent-couples of survivors and 3'341 control parents (1'731 mothers; 1'610 fathers). Mothers of survivors were more often not employed (29% versus 22%; ptrend = 0.007), but no differences between mothers were found in multivariable analysis. Fathers of survivors were more often employed full-time (93% versus 87%; ptrend = 0.002), which remained significant in multivariable analysis. Among parents of survivors, mothers with tertiary education (OR = 2.40, CI: 1.14-5.07) were more likely to be employed, and a migration background (OR = 3.63, CI: 1.71-7.71) increased the likelihood of mothers being employed full-time. Less likely to be employed were mothers of survivors diagnosed with lymphoma (OR = 0.31, CI: 0.13-0.73) and mothers with >2 children (OR = 0.48, CI: 0.30-0.75), as well as fathers of survivors who had had a relapse (OR = 0.13, CI: 0.04-0.36). CONCLUSION The employment situation of parents of long-term survivors reflected the more traditional parenting roles.
Specific support for parents with low education, with additional children, and whose child had a more severe cancer could improve their long-term employment situation.
Abstract:
This research examines the site and situation characteristics of community trails as landscapes promoting physical activity. Trail segment and neighborhood characteristics for six trails in urban, suburban, and exurban towns in northeastern Massachusetts were assessed from primary Global Positioning System (GPS) data and from secondary Census and land use data integrated in a geographic information system (GIS). Correlations between neighborhood characteristics (street and housing density, land use mix, and sociodemographics) and trail segment characteristics and amenities measure the degree to which trail segment attributes are associated with the surrounding neighborhood.
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and then stop, delivering no dose beyond their range. However, because the range of a proton beam depends heavily on the tissue density along its beam path, uncertainties in patient setup position and in range calculation can degrade the dose distribution significantly. Despite these challenges, which are unique to proton therapy, the current management of uncertainties during proton treatment planning has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan-evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. To remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to these uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we showed that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient.
Treatment plan evaluation method adapted to proton therapy: the dose-volume histogram of the clinical target volume (CTV), or of any other volume of interest, at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned above. Currently, the PTV is used as a surrogate of the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. To remedy this problem, we proposed the use of statistical parameters to quantify uncertainties directly on both the dose-volume histogram and the dose distribution. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of the dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
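The robust-evaluation idea described above – reporting the expectation value and standard deviation of a dosimetric parameter across uncertainty scenarios rather than a single nominal value – can be sketched numerically. The dose model below is a toy stand-in, not the dissertation's dose engine: in practice each scenario dose would come from recomputing the plan under a sampled setup/range shift.

```python
import numpy as np

rng = np.random.default_rng(0)

def d95(dose_voxels):
    """D95: dose received by at least 95% of the voxels (5th percentile)."""
    return np.percentile(dose_voxels, 5)

# Toy stand-in for recomputed CTV doses under each uncertainty scenario:
# rows = scenarios (sampled setup/range shifts), columns = CTV voxels.
n_scenarios, n_voxels = 200, 1000
nominal = np.full(n_voxels, 60.0)                      # nominal 60 Gy plan
shifts = rng.normal(0.0, 1.5, size=(n_scenarios, 1))   # per-scenario dose perturbation (Gy)
scenario_doses = nominal + shifts + rng.normal(0, 0.3, size=(n_scenarios, n_voxels))

# Expectation value and standard deviation of D95 under the uncertainties.
d95_per_scenario = np.array([d95(d) for d in scenario_doses])
expected_d95 = d95_per_scenario.mean()
sd_d95 = d95_per_scenario.std()
print(f"Nominal D95: {d95(nominal):.1f} Gy")
print(f"Expected D95 under uncertainty: {expected_d95:.1f} +/- {sd_d95:.1f} Gy")
```

The same scenario loop yields an expected dose-volume histogram with a standard-deviation band when the percentile is swept over the volume axis.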
Abstract:
On the strongly karstified and almost unvegetated surface of the Zugspitzplatt, at an altitude of about 2290 m in the Wettersteingebirge, there is a doline within which, over a period of several thousand years, a bed of fine loess-like sediment almost 1 m thick has accumulated. Notwithstanding the situation of this locality far above the present tree-line, the infill contains quantities of pollen and spores sufficient for pollen analysis without the use of any enrichment techniques. Despite poor pollen preservation, it was possible to date the basal layers of this profile on the basis of their pollen assemblages, and AMS dating (7415 ± 30 BP) has confirmed that the oldest sediments were laid down during the early Atlantic period, the time of the thermal optimum of the Holocene. At least since that time, this site has never been overridden by a glacier. The moraine associated with the Löbben Oscillation between 3400 and 3100 BP – here represented by the so-called Platt Stillstand (Plattstand) – did not quite reach the doline. A diagram shows the known Holocene glacial limits. The composition of the pollen assemblages from the two oldest levels with high pollen concentrations strongly suggests that the distance between the doline and the forest was much smaller during the Atlantic than at present.
Abstract:
Public participation is an integral part of Environmental Impact Assessment (EIA) and, as such, has been incorporated into regulatory norms. Assessing the effectiveness of public participation has remained elusive, however, partly because of the difficulty of identifying appropriate effectiveness criteria. This research uses Q methodology to discover and analyse stakeholders' social perspectives on the effectiveness of EIAs in the Western Cape, South Africa. It considers two case studies (the Main Road and Saldanha Bay EIAs) for contextual participant perspectives on effectiveness based on experience, and also the more general opinion of provincial consent regulator staff at the Department of Environmental Affairs and Development Planning (DEA&DP). Two main themes of investigation are drawn from the South African National Environmental Management Act (NEMA) imperative for effectiveness: firstly, the participation procedure, and secondly, the stakeholder capabilities necessary for effective participation. Four theoretical frameworks drawn from planning, politics and EIA theory are adapted to public participation and used to triangulate the analysis and discussion of the revealed social perspectives: citizen power in deliberation, Habermas' preconditions for the Ideal Speech Situation (ISS), a Foucauldian perspective on knowledge, power and politics, and a Capabilities Approach to public participation effectiveness. The empirical evidence from this research shows that the capacity and contextual constraints faced by participants demand the legislative imperatives for effective participation set out in NEMA. Implementing effective public participation has been shown to be a complex, dynamic and sometimes nebulous practice. Participants' functional understanding of the process was found to vary widely, with the consequence of unequal and dissatisfied stakeholder engagements.
Furthermore, the considerable variance in stakeholder capabilities in the South African social context resulted in inequalities in deliberation. The social perspectives revealed significant differences in participant experience in terms of citizen power in deliberation. The ISS preconditions are highly contested in both the Saldanha Bay EIA case study and the DEA&DP social perspectives. Only one Main Road EIA case study social perspective considered Foucault's notion of governmentality a reality in EIA public participation. The freedom to control one's environment, based on a Capabilities Approach, is a highly contested notion: although agreed with in principle, all of the social perspectives indicate that contextual and capacity realities constrain its realisation. This research has shown that Q methodology can be applied to EIA public participation in South Africa and, with appropriate research or monitoring applications, could serve as a useful feedback tool to inform best-practice public participation.
Abstract:
Over 100 samples of recent surface sediments from the bottom of the Atlantic Ocean offshore NW Africa between 34° and 6° N have been analysed palynologically. The objective of this study was to reveal the relation between source areas, transport systems, and the resulting distribution patterns of pollen and spores in marine sediments off NW Africa, in order to lay a sound foundation for the interpretation of pollen records from marine cores in this area. The clear zonation of the NW-African vegetation (due to the distinct climatic gradient) is helpful in determining the main source areas, and the presence of some major wind belts facilitates the registration of the average course of wind trajectories. The present circulation pattern is driven by the intertropical convergence zone (ITCZ), which shifts over the continent between c. 22° N (summer position) and c. 4° N (winter position) in the course of the year. Determining the period of main pollen release and the average atmospheric circulation pattern effective at that time of the year is of prime importance. The distribution patterns in recent marine sediments of pollen of a series of genera and families appear to record climatological and ecological variables, such as the trajectories of the NE trades, the January trades and the African Easterly Jet (Saharan Air Layer), the northernmost and southernmost positions of the intertropical convergence zone, and the extent and latitudinal situation of the NW-African vegetation belts. Pollen analysis of a series of dated deep-sea cores taken between c. 35° N and the equator off NW Africa enables the construction of paleo-distribution maps for time slices of the past, forming a register of paleoclimatological and paleoecological information.
Abstract:
A clash between the police and journalists covering a Falun Gong gathering in Surabaya in 2011 showed a significant change in the triangular relationship between Indonesia, China and the ethnic Chinese in Indonesia. During the Suharto period, the ethnic Chinese in Indonesia and China as a foreign state were problems for the Indonesian government: ethnic Chinese people were seen as close to China, and their loyalty to the nation was doubted. After the political reforms in Indonesia, together with the rise of China in the 2000s, in some situations it is the Indonesian government, together with the Chinese government, that is the problem for some ethnic Chinese in Indonesia. It is now the Indonesian government which is viewed as being too close to China, and thus harming national integrity, and suspected of being unnationalistic.
Abstract:
This paper investigates the current situation of industrial agglomeration in Costa Rica, utilizing firm-level panel data for the period 2008-2012. We calculated the Location Quotient and the Theil Index based on employment by industry and found that 14 cantons host industrial agglomerations in 9 industries. The analysis is consistent with the nature of the specific industries, the development of areas of concentration around free zones, and the evolving participation of Costa Rica in global value chains (GVCs).
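The two indices named above can be illustrated with a minimal sketch on a hypothetical canton-by-industry employment matrix (the study itself uses Costa Rican firm-level panel data). The location quotient compares a canton's industry share with the national share; a Theil (relative entropy) index per industry measures how far the industry's spatial distribution departs from that of overall employment.

```python
import numpy as np

# Toy employment matrix: rows = cantons, columns = industries
# (hypothetical numbers for illustration only).
emp = np.array([
    [120,  30,  50],
    [ 40, 200,  10],
    [ 60,  20, 300],
], dtype=float)

canton_tot = emp.sum(axis=1, keepdims=True)    # employment per canton
industry_tot = emp.sum(axis=0, keepdims=True)  # employment per industry
total = emp.sum()

# Location quotient: canton's industry share relative to the national share.
# LQ > 1 signals local specialisation (potential agglomeration).
lq = (emp / canton_tot) / (industry_tot / total)

# Theil index per industry: 0 means the industry is spread like overall
# employment; larger values mean spatial concentration.
ind_share = emp / industry_tot   # canton shares within each industry
pop_share = canton_tot / total   # canton shares of all employment
theil = (ind_share * np.log(ind_share / pop_share)).sum(axis=0)

print(np.round(lq, 2))
print(np.round(theil, 3))
```

Because each column of `ind_share` and `pop_share` sums to one, the per-industry Theil value is a Kullback-Leibler divergence and is therefore non-negative.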
Abstract:
An important step in assessing water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by the regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961-1990). The methodology compares different interpolation methods and alternatives for generating annual time series that minimise the bias with respect to observed values. The objective is to identify the best alternative for obtaining bias-corrected monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of natural runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared, with emphasis on the ability of each method to reproduce the observed behaviour of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evapotranspiration and precipitation. In addition, a comparison with the global runoff reference of the UNH/GRDC dataset is carried out, as a contrast of the "best estimator" of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method, and that the best alternative for bias correction of the monthly direct runoff time series of RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber (1904) also gives good results.
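Among the functional forms of the aridity index mentioned above, Schreiber's (1904) formula expresses mean annual runoff as Q = P * exp(-PET/P), where PET/P is the aridity index. A minimal sketch with hypothetical basin values (the study itself works with 338 SIMPA-estimated Spanish basins):

```python
import numpy as np

def schreiber_runoff(p_mm, pet_mm):
    """Mean annual runoff from Schreiber's (1904) formula:
    Q = P * exp(-PET/P), with aridity index PET/P."""
    p = np.asarray(p_mm, dtype=float)
    pet = np.asarray(pet_mm, dtype=float)
    return p * np.exp(-pet / p)

# Hypothetical basins: annual precipitation and potential
# evapotranspiration in mm (humid -> semi-arid gradient).
p = np.array([1400.0, 700.0, 450.0])
pet = np.array([700.0, 900.0, 1100.0])
q = schreiber_runoff(p, pet)
print(np.round(q, 1))  # wetter, less arid basins keep a larger share of P as runoff
```

Since exp(-PET/P) < 1 whenever PET > 0, the estimated runoff is always a fraction of precipitation, as a water balance requires.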
Abstract:
In the present uncertain global context of reaching equal social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant contributor to this energy mix up to 2050, with an expected share of around 70% of global and ca. 60% of European electricity generation; coal will remain a key player. Hence, a direct effect on CO2 emissions is expected under a business-as-usual scenario, with forecasts of three times the present CO2 concentration, up to 1,200 ppm, by the end of this century. The Kyoto Protocol was the first approach to taking global responsibility for monitoring CO2 emissions and setting cap targets for 2012 with reference to 1990. Some of the principal CO2 emitters did not ratify the reduction targets, although the USA and China are taking their own actions and parallel reduction measures. More efficient combustion processes that consume less fuel would be a significant contribution from the electricity generation sector to dwindling CO2 concentration levels, but might not be sufficient. Carbon Capture and Storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to bring them into use. After the first research projects and initial scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications in coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered worthwhile: reuse (EOR & EGR) and storage. The study evaluates the state of CO2 capture technology development, and the availability and investment cost of the different technologies, with few operating cost analyses possible at the time. The main findings and the abatement potential for coal applications are presented.
DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.
Abstract:
Over roughly the last ten years, the investigation of construction techniques and materials in Ancient Rome has advanced and yielded positive results. This work has been directed at obtaining data based on chemical composition, as well as on the action and reaction of materials under weathering and post-depositional displacement. Much of these data should be interpreted in terms of the deterioration and damage of concrete material produced in a given landscape with particular meteorological characteristics. Concrete mixtures such as lime and gypsum mortars should be analysed in laboratory test programmes, and not only through descriptions based on the reference works of Strabo, Pliny the Elder or Vitruvius. Roman manufacture was determined by weather conditions, landscape, natural resources and, of course, the economic situation of the owner. In any case, we must investigate every aspect of the construction. On the one hand, chemical techniques such as X-ray diffraction and optical microscopy reveal the granular composition of the mixture; on the other hand, physical and mechanical tests such as compressive strength, capillary absorption on contact or water behaviour reveal the reactions of binder and aggregates to weathering. However, we must be capable of interpreting these results. In recent years, many analyses carried out at archaeological sites in Spain have contributed different points of view and provided new data with which to shape a method for continuing the investigation of Roman mortars. If we carry out chemical and physical analyses of Roman mortars together, and are able to interpret the construction and the resources used, we can come to understand the process of construction, its date, and also the approach to future restoration.
Abstract:
Pragmatism is the leading motivation for regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general and, although the introduction of a penalty is probably the most popular form, it is just one of many forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques to modelling the response of biological neurons. The advances in supervised classification deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner.
Finally, we present a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter.
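The way an L1 penalty produces exact zeros can be illustrated with a small coordinate-descent lasso solver on synthetic data. This is a generic sketch of L1-regularized least squares, not the dissertation's own methodology; the soft-thresholding update is what drives most coefficients exactly to zero.

```python
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t; values inside [-t, t] become exactly 0."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """L1-regularized least squares via cyclic coordinate descent.
    Minimizes (1/2n)||y - Xb||^2 + alpha * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, alpha) / col_sq[j]
    return b

rng = np.random.default_rng(0)

# Sparse ground truth: only 3 of 50 inputs matter.
n, p = 100, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

b_hat = lasso_cd(X, y, alpha=0.1)
print("non-zero coefficients:", np.flatnonzero(b_hat))
```

Increasing `alpha` strengthens the shrinkage and selects fewer inputs, which is the bias-variance tradeoff mentioned in the abstract.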