615 results for Unbalanced Bidding
Abstract:
We consider a wide class of models that includes the highly reliable Markovian systems (HRMS) often used to represent the evolution of multi-component systems in reliability settings. Repair times and component lifetimes are random variables that follow a general distribution, and the repair service adopts a priority repair rule based on system failure risk. Since crude simulation has proved inefficient for highly dependable systems, the RESTART method is used for the estimation of steady-state unavailability and other reliability measures. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of a rare event (e.g., a system failure) is higher. The main difficulty involved in applying this method is finding a suitable function, called the importance function, to define those regions. In this paper we introduce an importance function which, for unbalanced systems, represents a great improvement over the importance function used in previous papers. We also demonstrate the asymptotic optimality of RESTART estimators in these models. Several examples are presented to show the effectiveness of the new approach, and probabilities down to the order of 10^-42 are accurately estimated with little computational effort.
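The splitting idea behind RESTART can be illustrated with a minimal fixed-effort multilevel-splitting sketch. This is a simplified relative of RESTART, not the authors' implementation: the rare overflow probability of a toy M/M/1 queue is estimated as a product of stage-wise conditional probabilities, using the queue length itself as the importance function. All names and parameters below are illustrative.

```python
import random

random.seed(0)

def reach(start, target, p_up):
    """Embedded M/M/1 walk: +1 with prob p_up, else -1; absorb at 0 or target."""
    x = start
    while 0 < x < target:
        x += 1 if random.random() < p_up else -1
    return x == target

def splitting(levels, p_up, n):
    """Fixed-effort multilevel splitting: the rare-event probability is the
    product of stage-wise estimates of P(reach next level before emptying)."""
    est = 1.0
    for lo, hi in zip(levels, levels[1:]):
        hits = sum(reach(lo, hi, p_up) for _ in range(n))
        if hits == 0:          # the rare event was lost at this stage
            return 0.0
        est *= hits / n
    return est

# Toy model: M/M/1 queue with load rho; estimate P(queue hits K before
# emptying, starting from 1 customer) -- a classic rare-event benchmark.
rho, K = 0.5, 20
p_up = rho / (1 + rho)                        # up-step prob of embedded chain
exact = (1 - 1 / rho) / (1 - (1 / rho) ** K)  # gambler's-ruin closed form
approx = splitting(list(range(1, K, 2)) + [K], p_up, 2000)
```

With the queue length as importance function, each stage's conditional probability is moderate (roughly 0.15-0.5 here), so a probability near 10^-6 is estimated accurately with a few thousand trials per stage, where crude Monte Carlo would need well over 10^7 runs.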
Abstract:
The family of Boosting algorithms represents a type of classification and regression approach that has proven very effective in Computer Vision problems, such as the detection, tracking and recognition of faces, people, deformable objects and actions. The first and most popular algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have been proposed to extend it to the more general multi-class, multi-label and cost-sensitive domains. Our interest centers on extending AdaBoost to two problems in the multi-class field, considering this a first step toward upcoming generalizations.
In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model introduces an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the Boosting procedure takes class imbalances into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second algorithm proposed, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a new loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost and PIBoost algorithms, we consider it a canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting in standard and unbalanced problems. A set of experiments is carried out to demonstrate the effectiveness of both methods against other relevant Boosting algorithms in their respective areas. In the experiments we resort to benchmark data sets used in the Machine Learning community, first to minimize classification errors and then to minimize costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data.
We conclude the thesis by justifying the horizon of future improvements our framework encompasses, owing to its applicability and theoretical flexibility.
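Since BAdaCost generalizes SAMME, the flavour of the multi-class reweighting described above can be sketched with SAMME's update rule. This is an illustrative fragment, not PIBoost or BAdaCost itself, and the helper name and toy data are hypothetical.

```python
import math

def samme_round(weights, correct, n_classes):
    """One SAMME boosting round: from the current sample weights and a flag
    per sample saying whether the weak learner classified it correctly,
    return the learner coefficient alpha and the renormalized weights."""
    total = sum(weights)
    err = sum(w for w, c in zip(weights, correct) if not c) / total
    # The log(K - 1) term is SAMME's multi-class correction: for K = 2 it
    # vanishes and the update reduces to binary AdaBoost.
    alpha = math.log((1 - err) / err) + math.log(n_classes - 1)
    new_w = [w * math.exp(alpha) if not c else w
             for w, c in zip(weights, correct)]
    z = sum(new_w)                      # renormalize to a distribution
    return alpha, [w / z for w in new_w]

# 6 samples, 3 classes, weak learner misclassifies the last two:
alpha, w = samme_round([1 / 6] * 6, [True] * 4 + [False] * 2, 3)
```

A weak learner only needs err < (K-1)/K, i.e. better than random guessing over K classes, for alpha > 0; this relaxed requirement is what makes the multi-class extension workable with very weak learners.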
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, around the clock, every day of the year. In recent years, e-Science applications such as e-Health or Smart Cities have undergone significant development. The need to deal efficiently with the computational demands of next-generation applications, together with the increasing demand for resources in traditional applications, has driven the rapid proliferation and growth of data centers.
A drawback to this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of worldwide electricity use. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17%, to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to more energy-efficient data centers. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption of a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air to the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.
When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves. However, as room temperature increases, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, e.g., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions to leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
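The leakage-cooling tradeoff described above can be made concrete with a toy model (all coefficients are invented for illustration, not taken from the thesis): fan power grows cubically with speed, leakage grows exponentially with CPU temperature, and more airflow lowers that temperature, so total server power has an interior minimum.

```python
import math

def cpu_temp(fan_speed, t_ambient=25.0, heat=80.0):
    """Toy thermal model: more airflow lowers the chip-to-air thermal
    resistance, so the CPU runs cooler at higher fan speeds."""
    return t_ambient + heat / (0.5 + 0.05 * fan_speed)

def total_power(fan_speed):
    """Server power = cubic fan power + temperature-dependent leakage.
    Every coefficient here is hypothetical."""
    p_fan = 2e-4 * fan_speed ** 3                        # cubic fan law
    p_leak = 5.0 * math.exp(0.02 * cpu_temp(fan_speed))  # exponential leakage
    return p_fan + p_leak

# Sweep fan speed: neither extreme is optimal; the minimum is interior.
best = min(range(10, 101), key=total_power)
```

Over-provisioning airflow (high fan speed) and starving it (hot chip, high leakage) are both inefficient; the joint optimum lies in between, which is why the thesis argues for managing cooling and workload together from a multivariate perspective.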
Abstract:
For institutions of higher education to effectively and efficiently manage their processes for attracting new students, they need to understand the influences that shape student intentions to recommend a program and/or college. This Thesis describes research identifying the quality components of a university that serve as antecedents of student intentions to recommend it. The research design integrates the teaching and service dimensions of higher education, as well as measures of student perceptions of the overall quality of a program, and introduces the student's academic performance and experience during the program as moderators of these relationships. This Thesis makes use of the Quality Survey of the Universidad ORT Uruguay, made available to the authors for research purposes. Students complete the survey each semester, on a compulsory basis, through a self-administered online platform, which makes it possible to track the assessments each student makes throughout his or her time at the university. The database is therefore an unbalanced panel of 195,058 records from 7,077 students over 17 semesters (March 2003 to 2011). The methodology relies on Structural Equation Models, which provide a number of advantages over other approaches. One of the most important is that they allow the researcher to introduce a priori information and assess its inclusion, as well as to reformulate the proposed models under a multi-sample approach.
In this research the models are estimated with MPLUS 7. Based on the findings, student perceptions of quality, service, teaching and program positively impact the intention to recommend the university, and the student's experience during the program moderates these relationships. Overall, the results indicate that as students advance in the program, the total effects of perceived service quality on the overall quality of the program and on the intention to recommend the university outweigh the effects of perceived teaching quality. The results indicate the need for institutions of higher education to incorporate into their strategic planning the segmentation of students according to their experience during the program.
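The moderation effect reported above (the quality-to-recommendation link strengthening with student experience) is commonly captured by an interaction term. The following is a generic illustration on simulated data using ordinary least squares, not the thesis' MPLUS structural equation models; every variable name and coefficient is invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
quality = rng.normal(size=n)               # perceived service quality (standardized)
experience = rng.integers(1, 11, size=n)   # semesters completed, 1..10
# Simulated moderation: the quality -> recommendation effect grows with experience.
recommend = (0.2 * quality + 0.05 * quality * experience
             + rng.normal(scale=0.5, size=n))

# OLS with an interaction term: beta[3] recovers the moderation strength.
X = np.column_stack([np.ones(n), quality, experience, quality * experience])
beta, *_ = np.linalg.lstsq(X, recommend, rcond=None)
```

In SEM terms this would be handled with a multi-group or latent-interaction specification; the point here is only that moderation appears as a non-zero interaction coefficient.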
Abstract:
Long before its establishment as an independent nation, the Chilean territory has been prone to the impact of nature, which is an inherent and damaging feature of this land. Such an impact is represented by earthquakes, the most damaging natural disasters. Today, Chilean society is still unable to understand that these impacts are part of an unbalanced coexistence between individuals and nature, since human beings, who live in and inhabit this space, are also part of nature. Therefore, each time this territory is hit by earthquakes, nature, represented by society, learns new lessons in order to provide a better response to future events. The 2010 earthquake, which rated 8.8 on the Richter scale, was the second largest after the most powerful earthquake ever recorded: the Valdivia earthquake of May 22, 1960, which rated 9.5 on the Richter scale. Societies are not static; they are changing and dynamic. The 2010 earthquake took place within a society that had been operating under a free-market economic model for 35 years. In 1990, about 40 per cent of the population lived in poverty; by 2010, the figure had fallen to 14 per cent. Likewise, a magnitude 7.8 quake struck the country during the military regime, in the early days of the above model. The 2010 earthquake allows us to draw conclusions within the context of this economic model.
Results are interesting in that there were few fatalities but significant economic loss. This thesis provides insights into the impact of the 2010 earthquake on the built housing stock, on social housing and on the poorest and most vulnerable inhabitants. It is the first research on earthquakes and social housing conducted in Chile. The hypothesis is that certain variables, together with an anti-seismic culture, have permeated popular segments of the population over the last 50 years, and that this may underlie the results obtained in this research. Likewise, this study proposes a certain "happy marriage" between the inhabitant and public housing policies. From this, recommendations are derived to improve progress on the problem investigated, contextualized against the theoretical framework developed in this research. However, the progress made so far does not guarantee good results in the next event. This is why these lessons feed new ones that will accompany Chilean society in its essence and identity as a nation.
Abstract:
In the Community of Madrid, the model of land occupation over the last two decades has responded to market supply factors rather than to the needs of the population, driving a consumption of land and resources that leads to unsustainable overexploitation. Global metropolises are undergoing rapid and intense transformations driven by the emerging paradigms of globalization, governance, metropolization and the dispersal of activities across the territory, and it is through these paradigms that the plans of London and Paris, and Madrid's draft plans, are examined. Globalization erodes the sovereignty of public administrations and fuels competition among Europe's global cities. London, Paris and Madrid are centres of power, concentration and growth where space becomes dualized and inequality shapes urban restructuring: concentrations of poverty set against the spaces of a new emerging class dominated by service sectors and information technologies. Against neoliberal urban development, regulated through the market and grounded in the efficiency criteria of New Public Management, there emerges the possibility of society administering itself through voluntary, responsible actions that promote collective interests by recognizing its own identity, introducing the concept of governance. Against the exploitation of the territory by an extractive society that breeds corruption, a model of public-private cooperation is proposed, based on mutual trust, a stable regulatory framework, transparency and information, whose more even flow ICTs will undoubtedly support.
Throughout this process, Europe's metropolitan regions emerge as engines of growth in which administrative boundaries are overtaken, covering an ever wider territory, and where local governments must organize themselves through cooperative service provision that helps avoid territorial imbalances. Urban sprawl into low-density developments, peripheral shopping centres, the displacement of lower value-added activities to the periphery and the concentration of managerial functions in the centre fragment the territory into car-dependent islands and trigger processes of social exclusion, as high incomes flee and low incomes are expelled from urban centres. Monofunctional, discontinuous fragments are created along the motorways: places lacking identity that waste resources and undermine environmental, economic and social sustainability. The study of planning culture in Europe helps to understand the different approaches to spatial planning and the process of convergence among regions. EU documents rest on the need for competitiveness for European growth and social cohesion and, as regards territory, on polycentric development, resolving the town-country dualism, balanced access to infrastructure, prudent management of nature and heritage, and the fostering of identity. Two levels of study are proposed: one current, covering the latest London and Paris plans, and the other tracing the evolution of the draft plans for the Madrid Region, always in relation to the emerging paradigms identified and their reflection in the documents.
The London Plan is strategic, with a long-term vision that places great weight on process, on the mayor's leadership role and on adaptation to changing circumstances, subject to the uncertainties of a global city. Its implementation is conceived through collaboration and cooperation among administrations and stakeholders. The document's structure is flexible, setting out indicative guidance for the drafting of local plans; this guidance is non-binding, and the plan contains little graphic representation. The Paris Plan is more of a physical plan, similar to those of other European centres: it works on sectors and territories, with extensive information and the traits of a "Latin Plan" in the strength of its graphic expression, while also containing a strategic vision. It is binding in its provisions and regulations, and it sets out both to promote and to prohibit. Both plans address the international competitiveness of their urban centres, social equality, the inclusion of all social groups, and housing as a matter of human dignity. London frames governance as public-private cooperation and as necessary cooperation with neighbouring regions, whereas in Paris relations are more institutionalized, with an emphasis on vertical collaboration among administrations. Both propose densifying nodes served by public transport, soft mobility modes and the use of TODs, together with preserving a hierarchical green infrastructure, strengthening the blue network and improving the landscape of the peripheries. Madrid's successive draft territorial plans, by contrast, were clearly subject to economic cycles. The first document, the 1984 DOT, foresaw no medium-term growth, either economic or demographic, and therefore proposed no change to the radio-concentric model.
It was a rigid plan focused on recovering the rural environment and the city, and on sizing growth according to existing facilities and infrastructure. It advocated public administration intervention and the promotion of small retail. It highlighted income-based social imbalance, the marginalization of certain social groups, the housing/employment imbalance and excessive density. It stressed the need for rental housing for the most disadvantaged, the development of public land and the promotion of rail to make the central space accessible. It favoured small, local facilities, landscape treatment of the urban edges of settlements and the control of illegal activities, setting out guidelines for urban planning. The Estrategias (1989) contained a vision: changing the territorial model through public intervention via projects. It proposed the economic restructuring of the territory, the conversion of the productive apparatus, the relocation of low value-added activities and a greater ubiquity of economic activity. It emphasized diffusing centrality toward the southern territory to balance it with the north, seeking to rebalance employment and residence and to integrate the peripheries into economic development, both with one another and with the centre. Transport measures would consolidate these actions, modifying the radio-concentric model and easing mobility through the commuter rail network and intermodality. The plan rested on the leadership of the regional minister; it neither integrated sectors such as the environment nor established a monitoring document to assess, through the projects carried out, the effects of its policies and their contribution to territorial balance.
El Documento Preparatorio de las Bases (1995), es más de un compendio o plan de planes, recoge análisis y propuestas de los documentos anteriores y de planes sectoriales de otros departamentos. Presenta una doble estructura: un plan físico integrador clásico, que abarca los sectores y territorios, y recoge las Estrategias previas añadiendo puntos fuertes, como el malestar urbano y la rehabilitación el centro. Plantea la consecución del equilibrio ambiental mediante el crecimiento de las ciudades existentes, la vertebración territorial basada en la movilidad y en la potenciación de nuevas centralidades, la mejora de la habitabilidad y rehabilitación integral del Centro Urbano de Madrid, y la modernización del tejido productivo existente. No existe una idea-fuerza que aglutine todo el documento, parte del reconocimiento de un modelo existente concentrado y congestivo, un centro urbano dual y dos periferias al este y sur con un declive urbano y obsolescencia productiva y al oeste y norte con una dispersión que amenaza al equilibrio medioambiental. Señala como aspectos relevantes, la creciente polarización y segregación social, la deslocalización industrial, la aparición de las actividades de servicios a las empresas instaladas en las áreas metropolitanas, y la dispersión de las actividades económicas en el territorio por la banalización del uso del automóvil. Se plantea el reto de hacer ciudad de la extensión suburbana y su conexión con el sistema metropolitano, mediante una red de ciudades integrada y complementaria, en búsqueda de un mayor equilibrio y solidaridad territorial. Las Bases del PRET (1997) tenían como propósito iniciar el proceso de concertación en que debe basarse la elaboración del Plan. 
Parte de la ciudad mediterránea compacta, y diversa, y de la necesidad de que las actividades económicas, los servicios y la residencia estén en proximidad, resolviéndolo mediante una potente red de transporte público que permitiese una accesibilidad integrada al territorio. El flujo de residencia hacia la periferia, con un modelo ajeno de vivienda unifamiliar y la concentración del empleo en el centro producen desequilibrio territorial. Madrid manifiesta siempre apostó por la densificación del espacio central urbanizado, produciendo su congestión, frente al desarrollo de nuevos suelos que permitieran su expansión territorial. Precisa que es necesario preservar los valores de centralidad de Madrid, como generador de riqueza, canalizando toda aquella demanda de centralidad, hacia espacios más periféricos. El problema de la vivienda no lo ve solo como social, sino como económico, debido a la pérdida de empleos que supone su paralización. Observa ya los crecimientos residenciales en el borde de la region por el menor valor del suelo. Plantea como la política de oferta ha dado lugar a un modelo de crecimiento fragmentado, desequilibrado, desestructurado, con fuertes déficits dotacionales y de equipamiento, que inciden en la segregación espacial de las rentas, agravando el proceso de falta de identidad morfológica y de desarraigo de los valores urbanos. El plan señalaba que la presión sobre el territorio creaba su densificación por las limitaciones de espacio, Incidía en limitar el peso de la intervención pública, no planteando propuestas de cooperación público-privado. La mayor incoherencia estriba en que los objetivos eran innovadores y coinciden en su mayoría con las propuestas estudiadas de Londres o Paris, pero se intentan implementar a través de un cambio hacia un modelo reticulado homogéneo, expansivo sobre el territorio, que supone un consumo de suelo y de infraestructuras para solucionar un problema inexistente, la gestión de la densidad. 
Durante las dos últimas décadas en ausencia de un plan regional, la postura neoliberal fue la de un exclusivo control de legalidad del planeamiento, los municipios entraron en un proceso de competencia para aprovechar las iniciales ventajas económicas de los crecimientos detectados, que proporcionaban una base económica “sólida” a unos municipios con escasos recursos en sus presupuestos municipales. La legislación se modifica a requerimiento de grupos interesados, no existiendo un marco estable. Se pierde la figura del plan no solo a nivel regional, si no en los sectores y el planeamiento municipal donde los municipios tiende a basarse en modificaciones puntuales con la subsiguiente pérdida del modelo urbanístico. La protección ambiental se estructura mediante un extenso nivel de figuras, con diversidad de competencias que impide su efectiva protección y control. Este proceso produce un despilfarro en la ocupación del suelo, apoyada en las infraestructuras viarias, y un crecimiento disperso y de baja densidad, cada vez más periférico, produciéndose una segmentación social por dualización del espacio en función de niveles de renta. Al amparo del boom inmobiliario, se produce una falta de política social de vivienda pública, más basada en la dinamización del mercado con producción de viviendas para rentas medias que en políticas de alquiler para determinados grupos concentrándose estas en los barrios desfavorecidos y en la periferia sur. Se produce un incremento de la vivienda unifamiliar, muchas veces amparada en políticas públicas, la misma se localiza en el oeste principalmente, en espacios de valor como el entorno del Guadarrama o con viviendas más baratas por la popularización de la tipología en la frontera de la Región. 
El territorio se especializa a modo de islas monofuncionales, las actividades financieras y de servicios avanzados a las empresas se localizan en el norte y oeste próximo, se pierde actividad industrial que se dispersa más al sur, muchas veces fuera de la región. Se incrementan los grandes centros comerciales colgados de las autovías y sin población en su entorno. Todo este proceso ha provocado una pérdida de utilización del transporte público y un aumento significativo del uso del vehículo privado. En la dos últimas décadas se ha producido en la región de Madrid desequilibrio territorial y segmentación social, falta de implicación de la sociedad en el territorio, dispersión del crecimiento y un incremento de los costes ambientales, sociales y económicos, situación, que solo, a través del uso racional del territorio se puede reconducir, apoyado en una planificación integrada sensible y participativa. ABSTRACT In Madrid the model of land occupation in the past two decades has been driven by market supply factors rather than the needs of the population. This results in a consumption of land and resources that leads to unsustainable overexploitation. Addressing this issue must be done through sensitive and participatory integrated planning. Global cities are experiencing rapid and intense change based on the emerging paradigms of globalization, governance, metropolization and the dispersion of activities in the territory. Through this context, a closer look will be taken at the London and Paris plans as well as the tentative plans of Madrid. Globalization causes the loss of state sovereignty and the competitiveness among global cities in Europe; London, Paris and Madrid. These are centres of power, concentration and growth where the duality of space is produced, and where inequality plays a part in urban restructuration. There are concentrated areas of poverty versus areas with a new emerging class where the services sector and information technologies are dominant. 
The introduction of ICTs contributes to a more homogeneous flow of information, leading us to the concept of governance. Against neoliberal urban development based on free-market regulations and efficiency criteria as established by the "New Public Management", new ways emerge in which society administers itself through voluntary and responsible actions to promote collective interests by recognizing its own identity. A new model of public-private partnership surfaces, based on mutual trust, transparency, information and a stable regulatory framework, in light of territorial exploitation by the "extractive society" that generates corruption. Throughout this process, European metropolitan regions become motors of growth where administrative boundaries are overcome in an ever-expanding territory, and where government is organized through cooperative processes to provide services and protect against regional imbalances. Urban sprawl, or low-density development as seen in peripheral shopping centres, the off-shoring of low added-value activities to the periphery, and the concentration of business and top management functions in the centre lead to a fragmentation of the territory into automobile-dependent islands and a process of social exclusion brought on by the disappearance of high incomes. Another effect is the elimination of low-income populations from urban centres. As a consequence, discontinuous expansions and mono-functional places lacking identity materialize, supported by a highway network and high resource consumption. Studying the culture of urban planning in Europe provides better insight into different approaches to spatial planning and the process of convergence between different regions. EU documents are based on the need for competitiveness for European growth and social cohesion.
In relation to polycentric territorial development, they are based on the need to resolve the dualism between countryside and city, to balance access to infrastructures, to manage nature prudently, and to consolidate heritage and identity. Two levels of study unfold: the first covers the current plans of London and the Île-de-France, and the second the evolution of the tentative plans for the Madrid region in relation to the emerging paradigms and how these are reflected in the documents. The London Plan is strategic, with a long-term vision that focuses on process, on the role of the mayor as a pivotal leader, and on adaptability to the changing circumstances brought on by the uncertainties of a global city. Its development is conceived through collaboration and cooperation between governments and stakeholders. The document's structure is flexible, providing guidance and indicative guidelines for drafting local plans, which are not binding, and it contains scarce graphic representation. The Plan of Paris takes on a more physical form and is similar to the plans of other European centres. It works on sectors and territories, using extensive information, and is more characteristic of a "Latin Plan", as seen in its detailed graphic expression; however, it also contains a strategic vision. Binding in its determinations and policy, it proposes promotion but also prohibition. Both plans address the international competitiveness of urban centres, social equality, the inclusion of all social groups, and housing as an issue of human dignity. London frames governance as cooperation between the public and private sectors and the need for cooperation with neighbouring regions; in Paris the relations are more institutionalized, highlighting vertical collaboration between administrations. Both propose nodes of densification served by public transportation, soft modes and the use of TOD, the preservation of a hierarchical green infrastructure, and the enhancement of the landscape in the urban peripheries.
The tentative territorial plans for the Madrid region prove to have been subject to economic cycles. The first document, the master guidelines (1984), projected neither economic nor demographic growth in the medium term and therefore proposed no modification of the radio-concentric model. It is a rigid plan focused on rural and urban recovery and on sizing growth according to existing endowments and infrastructures. It advocates government intervention and promotes small business. The plan emphasizes social imbalance in terms of income, the marginalization of certain social groups, the residence/employment imbalance and excessive density. It stresses the need for social rental housing for the underprivileged, promotes public land, and supports rail accessibility to the central area. It backs small, local facilities, enhances the landscaping of city borders, controls illegal activities and draws up guidelines for urban planning. The Strategies (1989) contain a vision: changing the territorial model through public intervention by means of projects. They bring to light the economic restructuring of the territory, the reconversion of the productive apparatus, the relocation of low value-added activities, and a greater ubiquity of economic activity. They also propose the diffusion of centrality towards the southern territories, balancing them with the north, in an attempt to rebalance employment and residence and to integrate the peripheral economies with one another and with the centre. Transport actions would consolidate the project, changing the radio-concentric model and facilitating mobility through the commuter rail network and intermodality. The plan rested on the leadership of the regional Minister and integrated neither sectors such as the environment nor a monitoring document that would evaluate the effects of the policies and their contribution to territorial balance.
The Preparatory Document of the Bases (1995) is more of a compendium, or plan of plans, compiling analyses and proposals from previous documents and from the sectoral plans of other departments. It has a dual structure: an integrating physical plan covering sectors and territories, which incorporates the previous Strategies while adding some strengths, such as urban malaise and the rehabilitation of the centre. It also proposes achieving environmental balance through the growth of existing cities, territorial linkage based on mobility and the strengthening of new centralities, improving the liveability and comprehensive rehabilitation of downtown Madrid, and modernizing the existing productive fabric. No single powerful idea binds the document together. It starts from the recognition of an existing concentrated and congested model: a dual urban centre and two peripheries, the east and south suffering urban decay and productive obsolescence, and the west and north marked by a dispersion that threatens the environmental balance. The relevant aspects the document highlights are increasing polarization and social segregation, industrial relocation, the emergence of business-services activities in metropolitan areas, and the dispersion of economic activities across the territory through the trivialization of car use. It proposes making city out of the suburban sprawl and connecting it to the metropolitan system through a network of integrated and complementary cities, in search of a better balance and territorial solidarity. The Bases of the PRET (1997) aimed to start the consultation process that must underpin the development of the plan. It stems from the compact and diverse Mediterranean city and from the need for economic activities, services and residences to be in proximity. To resolve the issue, it proposes a powerful public transport network that allows integrated accessibility to the territory.
The flow of residence to the periphery, based on an imported model of detached housing, together with the concentration of employment in the centre, produces territorial imbalance. Madrid always opted for the densification of the central space, producing its congestion, rather than the development of new land that would allow territorial expansion. The document states the need to preserve Madrid's values of centrality as a generator of wealth, channelling the demand for centrality towards more peripheral spaces. The housing problem is viewed not only as social but also as economic, owing to the loss of jobs resulting from its paralysis. It notes residential growth on the regional border due to the low price of land, and argues that the policy of supply has led to a fragmented model of growth that is unbalanced and unstructured, with strong infrastructure and facility deficits that feed the spatial segregation of income and aggravate the lack of morphological identity, uprooting urban values. Pressure on the territory caused its densification owing to space limitations; the proposed grid model entails consuming land and infrastructure to solve a non-problem: density. Focused on limiting the weight of public intervention, it raises no proposals for public-private cooperation. The biggest discrepancy is that the targets were innovative and mostly align with the proposals of London and Paris, yet they were to be implemented through a shift towards a uniform gridded model, expansive over the territory. During the last two decades, owing to the absence of a regional plan, the neoliberal stance restricted itself to a mere legality check on urban planning. The municipalities entered a competition process to capture the initial economic benefits of the growth under way, which provided a "solid" economic base for municipalities with limited resources in their budgets. The law was amended at the request of interest groups, without a stable legal framework. The figure of the plan is lost, not only regionally, but also in the sectors and in municipal planning.
Municipal planning tends to be based on piecemeal modifications, with the consequent loss of the urban model. Environmental protection is organized through an extensive array of protection figures with diverse competencies that prevent effective protection. This process squanders land, backed by growing road infrastructure, with dispersed, low-density growth causing social segmentation through the dualization of space by income level. During the housing boom, social public housing policy was lacking, geared more to boosting the market with housing production for average incomes than to rental policies for needy social groups; such housing became concentrated in disadvantaged neighbourhoods and the southern suburbs. As a result, single-family housing increased, often sheltered by public policy. It is located primarily in the west, in areas of high environmental value such as the surroundings of the Guadarrama, or, as the typology became popular with cheaper housing, on the border of the region. The territory specializes into mono-functional islands. Financial activities and advanced business services locate to the north and near west, while industrial activity is lost as it disperses south, often outside the region. Large shopping centres multiply, hanging off the highways with little or no surrounding population. This process leads to a loss of public transport use and a significant increase in the use of private vehicles. The absence of regional planning has produced greater imbalance, greater social segmentation, more dispersed growth and high environmental, social and economic costs, a situation that can only be redirected through the rational use of the territory, supported by sensitive and participatory integrated planning.
Abstract:
It is commonly accepted that pathways regulating proliferation/differentiation processes, if altered in their normal interplay, can lead to the induction of programmed cell death. In a previous work we reported that Polyoma virus Large Tumor antigen (PyLT) interferes with the in vitro terminal differentiation of skeletal myoblasts by binding and inactivating the retinoblastoma antioncogene product. This inhibition occurs after the activation of some early steps of the myogenic program. In the present work we report that myoblasts expressing wild-type PyLT, when subjected to differentiation stimuli, undergo cell death, and that this cell death can be defined as apoptosis. Apoptosis in PyLT-expressing myoblasts starts after growth factor removal, is promoted by cell confluence, and is temporally correlated with the expression of early markers of myogenic differentiation. Blocking the initial events of myogenesis with transforming growth factor β or basic fibroblast growth factor prevents PyLT-induced apoptosis, while accelerating this process by overexpressing the muscle-regulatory factor MyoD further increases cell death in this system. MyoD can induce PyLT-expressing myoblasts to accumulate RB, p21, and muscle-specific genes but is unable to induce G0 arrest. Several markers of different phases of the cell cycle, such as cyclin A, cdk-2, and cdc-2, fail to be down-regulated, indicating the occurrence of cell cycle progression. It has frequently been suggested that apoptosis can result from unbalanced cell cycle progression in the presence of a contrasting signal, such as growth factor deprivation. Our data implicate differentiation pathways, as a further contrasting signal, in the generation of this conflict during myoblast apoptosis.
Abstract:
We describe a fluorescence-based directed termination PCR (fluorescent DT–PCR) that allows accurate determination of actual sequence changes without dideoxy DNA sequencing. This is achieved using near infrared dye-labeled primers and performing two PCR reactions under low and unbalanced dNTP concentrations. Visualization of resulting termination fragments is accomplished with a dual dye Li-cor DNA sequencer. As each DT–PCR reaction generates two sets of terminating fragments, a pair of complementary reactions with limiting dATP and dCTP collectively provide information on the entire sequence of a target DNA, allowing an accurate determination of any base change. Blind analysis of 78 mutants of the supF reporter gene using fluorescent DT–PCR not only correctly determined the nature and position of all types of substitution mutations in the supF gene, but also allowed rapid scanning of the signature sequences among identical mutations. The method provides simplicity in the generation of terminating fragments and 100% accuracy in mutation characterization. Fluorescent DT–PCR was successfully used to generate a UV-induced spectrum of mutations in the supF gene following replication on a single plate of human DNA repair-deficient cells. We anticipate that the automated DT–PCR method will serve as a cost-effective alternative to dideoxy sequencing in studies involving large-scale analysis for nucleotide sequence changes.
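The core idea of DT-PCR, that a reaction run with one limiting dNTP tends to terminate at each position where the new strand must incorporate that nucleotide, so that fragment lengths map base positions, can be illustrated with a toy model. The function below is a deliberately simplified sketch, not the actual DT-PCR chemistry or software, and all names are hypothetical:

```python
def termination_lengths(template, limiting_base):
    """Toy DT-PCR model: extension from position 1 terminates at each
    position where the synthesized strand must incorporate the limiting
    dNTP, so fragment lengths mark those positions.
    `template` is given as the sequence of the synthesized strand."""
    return [i + 1 for i, b in enumerate(template) if b == limiting_base]

seq = "GATCACGA"
print(termination_lengths(seq, "A"))  # positions needing dATP: [2, 5, 8]
print(termination_lengths(seq, "C"))  # positions needing dCTP: [4, 6]
```

In this idealized picture, the two complementary reactions (limiting dATP and limiting dCTP) jointly flag every A and C position, which, combined with the opposite strand, covers the full sequence of the target.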
Abstract:
The abundant chromosome abnormalities in most carcinomas are probably a reflection of genomic instability present in the tumor, so the pattern and variability of chromosome abnormalities will reflect the mechanism of instability combined with the effects of selection. Chromosome rearrangement was investigated in 17 colorectal carcinoma-derived cell lines. Comparative genomic hybridization showed that the chromosome changes were representative of those found in primary tumors. Spectral karyotyping (SKY) showed that translocations were very varied and mostly unbalanced, with no translocation occurring in more than three lines. At least three karyotype patterns could be distinguished. Some lines had few chromosome abnormalities: they all showed microsatellite instability, the replication error (RER)+ phenotype. Most lines had many chromosome abnormalities: at least seven showed a surprisingly consistent pattern, characterized by multiple unbalanced translocations and intermetaphase variation, with chromosome numbers around triploid, 6–16 structural aberrations, and similarities in gains and losses. Almost all of these were RER−, but one, LS411, was RER+. The line HCA7 showed a novel pattern, suggesting a third kind of genomic instability: multiple reciprocal translocations, with little numerical change or variability. This line was also RER+. The coexistence in one tumor of two kinds of genomic instability is to be expected if the underlying defects are selected for in tumor evolution.
Abstract:
Reef-building corals and other tropical anthozoans harbor endosymbiotic dinoflagellates. It is now recognized that the dinoflagellates are fundamental to the biology of their hosts, and their carbon and nitrogen metabolisms are linked in important ways. Unlike free-living species, growth of symbiotic dinoflagellates is unbalanced, and a substantial fraction of the carbon fixed daily by symbiont photosynthesis is released and used by the host for respiration and growth. Release of fixed carbon as low-molecular-weight compounds by freshly isolated symbiotic dinoflagellates is evoked by a factor (i.e., a chemical agent) present in a homogenate of host tissue. We have identified this "host factor" in the Hawaiian coral Pocillopora damicornis as a set of free amino acids. Synthetic amino acid mixtures, based on the measured free amino acid pools of P. damicornis tissues, not only elicit the selective release of 14C-labeled photosynthetic products from isolated symbiotic dinoflagellates but also enhance total 14CO2 fixation.
Abstract:
The Brazilian Federal Constitution institutionalized the right to health in Brazil; Article 196 states: "Health is a right of all and a duty of the State." In regulating the creation of the Unified Health System, Law 8,080 reaffirms the State's obligation towards the population's health. In this context, Pharmaceutical Assistance (AF) plays an important role in guaranteeing safe and effective medicines, in the time and quantity needed to meet citizens' demand. Yet despite constant updates aimed at making AF processes more efficient, there are still situations in which patients cannot obtain the medicine they need, whether because it is out of stock at dispensing units or absent from the standardized medicine lists. This leads citizens to resort to the courts to try to secure access to the medicine sought, a phenomenon known as the judicialization of health, which has major implications for the management of pharmaceutical assistance. The objective of this work was therefore to describe the general panorama of lawsuits claiming medicines and insulin supplies that were taken on by the municipal government of Ribeirão Preto. To achieve these objectives, a descriptive study was carried out. A total of 1,861 lawsuits were analyzed, 1,083 still active and 778 already closed. In most lawsuits (99%) the judge set a maximum deadline of 30 days for compliance, which is insufficient to carry out a public tender and forces management to use a parallel purchasing route. The Public Prosecutor's Office was the main legal representative used (71.7%), and most prescriptions came from private hospitals and clinics (50.1%). The main diagnoses cited in the lawsuits were diabetes and attention deficit hyperactivity disorder (ADHD), and the most prevalent medicines were insulins and methylphenidate.
Among the prescribing physicians, 3% account for approximately 30% of the prescriptions. In view of these results, this study highlighted the impact of the judicialization of health in the municipality of Ribeirão Preto, requiring structural and financial organization from public management to deal with judicial demands.
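The reported concentration of prescriptions (3% of prescribers accounting for roughly 30% of prescriptions) is a simple top-share computation over prescriber counts. A minimal sketch, using synthetic data, as the study's case-level records are not reproduced here:

```python
from collections import Counter

def top_share(prescriber_ids, top_frac):
    """Share of all prescriptions written by the top `top_frac`
    fraction of prescribers, ranked by prescription count."""
    counts = sorted(Counter(prescriber_ids).values(), reverse=True)
    k = max(1, round(len(counts) * top_frac))
    return sum(counts[:k]) / sum(counts)

# Invented example: 97 prescribers with 1 prescription each and
# 3 heavy prescribers with 14 each (100 prescribers, 139 prescriptions).
ids = [f"md{i}" for i in range(97)] + ["big0", "big1", "big2"] * 14
print(round(top_share(ids, 0.03), 2))  # top 3% hold 42/139 ≈ 0.3
```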
Abstract:
This study aimed (a) to identify mechanisms by which cytogenetically balanced chromosomal rearrangements may be causally associated with particular clinical conditions and (b) to contribute to the understanding of how these rearrangements form. To this end, 45 cytogenetically balanced chromosomal rearrangements (29 translocations, 10 inversions and six complex rearrangements) were studied, detected in patients presenting congenital malformations, neuropsychomotor developmental impairment or intellectual disability. Thirty-one were sporadic rearrangements, three were familial rearrangements segregating with the clinical condition, and a further 11 were inherited from phenotypically normal parents. The breakpoints of these rearrangements were first mapped by fluorescence in situ hybridization (FISH). The search for genomic microdeletions and duplications was performed by aCGH. Breakpoint investigation proceeded with Mate-Pair Sequencing (MPS), which localizes breaks to segments of 100 bp to 1 kb in most cases. To obtain the breakpoint junction segments at base-pair resolution, the segments delimited by MPS were sequenced by the Sanger method. aCGH analysis revealed microdeletions or microduplications on the rearranged chromosomes in 12 of the 45 patients investigated (27%). MPS analysis of 27 rearrangements allowed the characterization of the breakpoint junctions. MPS expanded the number of breakpoints detected by karyotype or aCGH analysis from 114 to 156 (at a resolution < 2 kb in most cases). The number of breakpoints per rearrangement ranged from 2 to 20. The 156 breakpoints gave rise to 86 balanced structural variants and another 32 unbalanced variants.
Losses and gains of submicroscopic segments on the rearranged chromosomes were the main cause of, or probably contributed to, the clinical picture of 12 of the 45 patients. In five of these 12 rearrangements, MPS detected the disruption of genes already associated with disease, or probable alteration of their regulatory regions, contributing to the clinical picture. In four of the 33 rearrangements not associated with segment losses or gains, MPS analysis revealed the disruption of genes previously associated with disease, thus explaining the carriers' clinical features; another rearrangement may have altered the expression of a dosage-sensitive gene, leading to the clinical picture. A familial chromosomal rearrangement, identified on G-banding analysis as a balanced translocation, t(2;22)(p14;q12), segregated with neuropsychomotor developmental delay and learning difficulties associated with dysmorphisms. The combination of FISH, aCGH and MPS analyses revealed that it was in fact a complex rearrangement among chromosomes 2, 5 and 22, comprising 10 breaks. The segregation of different submicroscopic imbalances in affected and clinically normal individuals explained the clinical variability observed in the family. Balanced rearrangements detected in affected individuals but inherited from clinically normal parents are generally considered unrelated to the clinical picture, despite the possibility of chromosomal imbalances generated by unequal crossing over during meiosis in the carrier parent. In this work, the investigation of 11 such rearrangements by aCGH revealed no segment losses or gains on the rearranged chromosomes. However, aCGH analysis of the carrier of one of these rearrangements, inv(12)mat, revealed an 8.7 Mb deletion on chromosome 8 as the cause of her clinical phenotype.
This deletion was related to another balanced rearrangement also present in her mother, independent of the inversion. To understand the mechanisms by which cytogenetically balanced rearrangements form, we investigated the junction segments at base-pair resolution. MPS analysis, which in most cases mapped breakpoints to segments < 1 kb, allowed Sanger sequencing of 51 junction segments from 17 rearrangements. The occurrence of blunt fusions or insertions and deletions < 10 bp, and the absence of homology or the presence of microhomology 2 bp to 4 bp long, indicated non-homologous end joining (NHEJ) as the mechanism in most of the 51 junctions characterized. The features of three of the four most complex rearrangements, with 17-20 breaks, indicated their formation by chromothripsis. This study shows the importance of genomic copy-number analysis by microarray, together with breakpoint mapping by MPS, for determining the structure of cytogenetically balanced chromosomal rearrangements and their clinical impact. The mapping of the junction segments by MPS, enabling Sanger sequencing, was essential for understanding the mechanisms by which these rearrangements form.
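The junction criteria described above, a blunt fusion versus a 2-4 bp microhomology suggestive of NHEJ, amount to finding the longest sequence shared by the two donor ends at the join. A minimal sketch of that comparison follows; the sequences, function names and labels are illustrative, not taken from the study:

```python
def microhomology_len(left_end, right_start):
    """Length of the longest suffix of `left_end` that equals a
    prefix of `right_start`: the sequence shared at the junction."""
    for k in range(min(len(left_end), len(right_start)), 0, -1):
        if left_end[-k:] == right_start[:k]:
            return k
    return 0

def classify_junction(left_end, right_start):
    """Crude label following the abstract's criteria: no shared
    sequence suggests a blunt fusion; a short overlap is reported
    as N bp of microhomology."""
    k = microhomology_len(left_end, right_start)
    return "blunt fusion (no homology)" if k == 0 else f"{k} bp microhomology"

print(classify_junction("GATTACA", "ACAGGTT"))  # suffix/prefix "ACA" -> 3 bp
```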
Resumo:
Biplot analyses using additive main effects and multiplicative interaction (AMMI) models require complete data matrices, but multi-environment trials frequently contain missing data. This thesis proposes new single and multiple imputation methodologies that can be used to analyze unbalanced data in experiments with genotype-by-environment (G×E) interaction. The first is a new extension of the eigenvector cross-validation method (Bro et al., 2008). The second is a new nonparametric algorithm obtained through modifications of the single imputation method developed by Yan (2013). A study is also included that considers imputation systems recently reported in the literature and compares them with the classical procedure recommended for imputation in G×E trials, namely the combination of the Expectation-Maximization algorithm with AMMI models (EM-AMMI). Finally, generalizations are provided of the single imputation described by Arciniegas-Alarcón et al. (2010), which combines regression with a lower-rank approximation of a matrix. All the methodologies are based on the singular value decomposition (SVD) and are therefore free of distributional or structural assumptions. To assess the performance of the new imputation schemes, simulations were carried out on real data sets from different species, with values removed at random in different percentages, and the quality of the imputations was evaluated with distinct statistics. It was concluded that the SVD is a useful and flexible tool for building efficient techniques that circumvent the problem of information loss in experimental matrices.
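The core SVD-based imputation idea behind these methods can be sketched as an iterative procedure: fill the missing cells with column means, fit a low-rank SVD approximation, replace the missing cells with the fitted values, and repeat until convergence. The following is a minimal sketch of that generic scheme in NumPy, not the thesis's actual algorithms; the function name and the rank/tolerance parameters are illustrative:

```python
import numpy as np

def svd_impute(X, rank=2, tol=1e-6, max_iter=500):
    """Iteratively impute missing (NaN) entries of X with a rank-k SVD fit."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    # Start from column means as initial guesses for the missing cells
    filled = np.where(missing, np.nanmean(X, axis=0, keepdims=True), X)
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        # Truncated rank-k reconstruction of the current filled matrix
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        # Keep observed values fixed; update only the missing cells
        new = np.where(missing, approx, X)
        if np.max(np.abs(new - filled)) < tol:
            return new
        filled = new
    return filled
```

Because the observed cells are held fixed at every iteration, the procedure makes no distributional assumption about the data, matching the assumption-free character attributed to the SVD-based methods above.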
Resumo:
Globally, increasing demands for biofuels have intensified the rate of land-use change (LUC) for expansion of bioenergy crops. In Brazil, the world's largest sugarcane-ethanol producer, sugarcane area has expanded by 35% (3.2 Mha) in the last decade. Sugarcane expansion has resulted in extensive pastures being subjected to intensive mechanization and large inputs of agrochemicals, which have direct implications for soil quality (SQ). We hypothesized that LUC to support sugarcane expansion leads to overall SQ degradation. To test this hypothesis we conducted a field study at three sites in the central-southern region to assess the SQ response to the primary LUC sequence (i.e., native vegetation to pasture to sugarcane) associated with sugarcane expansion in Brazil. At each land-use site, undisturbed and disturbed soil samples were collected from the 0-10, 10-20 and 20-30 cm depths. Soil chemical and physical attributes were measured through on-farm and laboratory analyses. A dataset of soil biological attributes was also included in this study. Initially, the LUC effects on each individual soil indicator were quantified. Afterward, the LUC effects on overall SQ were assessed using the Soil Management Assessment Framework (SMAF). Furthermore, six SQ indexes (SQI) were developed using approaches of increasing complexity. Our results showed that long-term conversion from native vegetation to extensive pasture led to soil acidification, significant depletion of soil organic carbon (SOC) and macronutrients [especially phosphorus (P)], and severe soil compaction, creating an unbalanced ratio between water- and air-filled pore space within the soil and increasing mechanical resistance to root growth. Conversion from pasture to sugarcane improved soil chemical quality by correcting acidity and increasing macronutrient levels.
Despite those improvements, most of the P added by fertilizer accumulated in less plant-available P forms, confirming the key role organic P plays in providing available P to plants in Brazilian soils. Long-term sugarcane production subsequently led to further SOC depletion. Sugarcane production had slight negative impacts on soil physical attributes compared to pasture land. Although the tillage performed for sugarcane planting and replanting alleviates soil compaction, our data suggested that the effects are short-term, with persistent, reoccurring soil consolidation that increases erosion risk over time. These soil physical changes, induced by LUC, were detected by quantitative soil physical properties as well as by visual evaluation of soil structure (VESS), an on-farm and user-friendly method for evaluating SQ. The SMAF efficiently detected overall SQ response to LUC, and it could be reliably used under Brazilian soil conditions. Furthermore, all of the SQI values developed in this study were able to rank SQ among land uses. We therefore recommend that simpler and more cost-effective SQI strategies, using a small number of carefully chosen soil indicators such as pH, P, K, VESS and SOC, with proportional weighting within each soil sector (chemical, physical and biological), be used as a protocol for SQ assessments in Brazilian sugarcane areas. The SMAF and SQI scores suggested that long-term conversion from native vegetation to extensive pasture depleted overall SQ, driven by decreases in chemical, physical and biological indicators. In contrast, conversion from pasture to sugarcane had no negative impacts on overall SQ, mainly because chemical improvements offset negative impacts on biological and physical indicators. Therefore, our findings can be used as a scientific basis by farmers, extension agents and public policy makers to adopt and develop management strategies that sustain and/or improve SQ and the sustainability of sugarcane production in Brazil.
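The additive SQI structure described above (indicators scored, averaged within each sector, and sectors weighted proportionally) can be illustrated with a small sketch. This is a generic additive index with equal sector weights and hypothetical 0-1 indicator scores, not the exact SMAF scoring curves or weights used in the study:

```python
def soil_quality_index(sectors):
    """Additive SQI: each indicator is scored on a 0-1 scale, indicators
    are averaged within their sector, and sector means are averaged with
    equal weight (illustrative weighting, not the study's calibration)."""
    sector_means = []
    for indicators in sectors.values():
        vals = list(indicators.values())
        sector_means.append(sum(vals) / len(vals))
    return sum(sector_means) / len(sector_means)

# Hypothetical scored indicators for one sampled depth, using the
# minimum indicator set recommended above (pH, P, K, VESS, SOC)
sqi = soil_quality_index({
    "chemical":   {"pH": 0.85, "P": 0.60, "K": 0.75},
    "physical":   {"VESS": 0.55},
    "biological": {"SOC": 0.70},
})
```

Averaging within sectors before combining keeps a sector with many measured indicators (here, chemical) from dominating the index, which is the point of the proportional-weighting recommendation.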
Resumo:
This research aims to understand the dynamics of soil behavior at the macro- and micromorphological scales, visualized along a toposequence, with respect to the morphological agents that condition and contribute to the triggering of erosive processes. The study area lies within the Laranja Azeda sub-basin, located in the east-central region of the state of São Paulo, in the municipality of São Carlos/SP, and is of fundamental importance because it belongs to the Ribeirão Feijão watershed, an important urban water source for the city. Land-use and occupation planning suited to the physical factors that shape this landscape's dynamics is essential for conserving and preserving the water resources found there, where the widespread occurrence of erosive processes is a matter of concern, since these can cause silting of rivers and reservoirs. Using a multiscale methodology to select the detailed study area and to understand the organization and dynamics of the pedological cover, the procedures proposed by the Structural Analysis of the Pedological Cover and the concepts and techniques of soil micromorphology were applied. The distribution of soils along the Manacá Toposequence is strictly correlated with the vertical transformation of parent material into soil; along the slope there is a lithological differentiation that conditions distinct morphology at both the macromorphological and micromorphological scales. The upper and middle thirds of the slope are associated with colluvial-eluvial deposits of the Itaqueri Formation, on which a Latossolo Vermelho Amarelo (Red-Yellow Latosol) develops. The lower third of the slope corresponds to a soil formed from sandstones of the Botucatu Formation, classified as a Neossolo Quartzarênico (Quartzarenic Neosol).
With the aid of two-dimensional analysis of images taken from soil thin sections, it was possible to visualize and quantify macroporosity along the slope, an important morphological attribute that controls water flow and conditions the development of erosive processes. It is concluded that the occurrence of gullies in the lower-middle third of the slope is the materialization, in the form of erosive processes, of this differential behavior of the soil mass. In the Manacá Toposequence, the search for dynamic equilibrium on the slope is therefore driven by the genetic-evolutionary dynamics of the geological formations that underlie the landscape, triggering erosive processes that tend to progress in disequilibrium, depending on the management established for the site.
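Quantifying macroporosity from thin-section images typically reduces to segmenting pore pixels and computing their areal fraction. A minimal sketch of that segmentation step with NumPy follows; the brightness threshold and the assumption that pores appear as bright pixels (as in UV photographs of resin-impregnated sections) are illustrative, not the study's actual image-analysis protocol:

```python
import numpy as np

def macroporosity(image, pore_threshold=200):
    """Areal macroporosity of a grayscale thin-section image, i.e. the
    fraction of pixels at or above a brightness threshold (illustrative
    segmentation; real workflows calibrate the threshold per image set)."""
    img = np.asarray(image)
    pores = img >= pore_threshold  # boolean mask of pore pixels
    return pores.sum() / img.size

# Toy 2x2 "image": two bright (pore) pixels, two dark (matrix) pixels
phi = macroporosity(np.array([[255, 10], [240, 30]]))  # -> 0.5
```

Comparing this areal fraction across samples taken along the slope is what allows the differential porosity behavior of the two soil units to be expressed as a number.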