969 results for Static-order-trade-off


Relevance:

100.00%

Publisher:

Abstract:

Examines the various theoretical approaches that defend the relevance of capital structure to the determination of firm value. It is specifically concerned with identifying the determinants of the capital structure choice of non-financial firms in Brazil, seeking, where possible, evidence that this choice can be explained by the "static trade-off" and/or the modified "pecking order" hypotheses.

Relevance:

100.00%

Publisher:

Abstract:

This thesis addresses three gaps that have not yet been discussed in the literature, all concerning the effects of internationalization on the capital and ownership structure of multinational firms. To that end, three essays on the effects of internationalization were developed. The first analyzed the effects of entry modes on leverage; the second, the effects of internationalization on the composition of debt; and the third, the effects of internationalization on ownership structure. Using data on publicly traded Latin American multinational and domestic firms from 2007 to 2011, panel datasets with test and control variables were assembled for each essay. The results of the first essay indicate that entry modes are relevant in determining the leverage of multinationals and complement the findings of studies based on the upstream-downstream hypothesis. They also show that firms with equity entry modes tend to carry more long-term and total debt than firms with non-equity entry modes, reinforcing explanations given by the static trade-off theory, and less short-term debt, reinforcing the explanations of agency theory and the pecking order. The results of the second essay indicate that the degree of internationalization: (i) increases the level of almost all debt provided by financial agents (banks), in line with the upstream-downstream hypothesis, and decreases debt provided by non-financial agents (trade credit), in line with the financial constraint theory; (ii) has no effect on debt maturity; (iii) contrary to expectations, does not increase borrowing from the national development bank (subsidized loans); (iv) was relevant in changing the debt composition of multinationals relative to domestic firms; and (v) the entry mode chosen for internationalization does not affect debt composition. In the third essay, the main finding was that more internationalized firms with equity entry modes have lower levels of ownership concentration. Another important result of the third essay is that there is a simultaneity, not yet explored in previous studies, in the determination of the degree of internationalization and of ownership concentration. Both results of the third essay are supported by the Resource-Based View, in contrast with the traditional agency-theory view.

Relevance:

100.00%

Publisher:

Abstract:

This study investigates the influence of asset classes and the breakdown of tangibility as determinants of the capital structure of companies listed on the BM&FBOVESPA in the period 2008-2012. Two current-asset classes were composed; grouped by liquidity, they are also the ones analyzed by financial institutions when granting credit: current resources (cash, banks and financial investments) and operations with trade receivables (inventories and accounts receivable). Tangible assets were broken down into their main components pledged as collateral for loans, namely machinery & equipment and land & buildings. To extend the analysis, three leverage metrics (accounting, financial and market) were applied and the sample was divided into the economic sectors adopted by BM&FBOVESPA. A dynamic panel data model estimated by two-step system GMM was used, given its robustness to endogeneity problems and to omitted-variable bias. The results suggest that current resources are determinants of capital structure, possibly because they act as proxies for financial solvency, and their relationship with debt is positive. The sectoral analysis confirmed the results for current resources. Asset tangibility is inversely related to leverage; when it is disaggregated into its main components, the significant negative influence of machinery & equipment is most marked in the Industrial Goods sector. This result shows that, on average, the assets most specific to a company's operating activities are associated with lower use of third-party funds. As complementary results, leverage shows persistence, which is consistent with the static trade-off theory. Specifically for financial leverage, persistence remains relevant when the lagged current-asset class variables are controlled for. The proxy for growth opportunities, measured by the market-to-book ratio, has a coefficient whose sign contradicts expectations. Company size has a positive relationship with debt, in favor of the static trade-off theory. Profitability is the most consistent variable across all estimations, showing a strong, significant negative relationship with leverage, as the pecking order theory predicts.

Relevance:

100.00%

Publisher:

Abstract:

The main concern in Wireless Sensor Network (WSN) algorithms and protocols is energy consumption. Thus, WSN lifetime is one of the most important metrics used to measure the performance of WSN approaches. Another important metric is spatial coverage, where the main goal is to obtain sensed data in a uniform way. This paper proposes an approach called the (m,k)-Gur Game, which aims at a trade-off between quality of service and increased spatial coverage diversity. Simulation results show the effectiveness of this approach. © 2012 IEEE.
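
The abstract gives no implementation details; as a rough illustration of the underlying idea, the sketch below implements the classical Gur Game voting loop on which the proposed (m,k) variant builds. The memory depth, reward curve, node count and target fraction are hypothetical choices, not values from the paper.

```python
import random

# A minimal sketch of the classical Gur Game used for sensor duty-cycling.
# States -M..-1 mean "sleep", states 1..M mean "sense/transmit".
M = 3            # automaton memory depth (hypothetical)
N_SENSORS = 50   # number of sensor nodes (hypothetical)
TARGET = 0.35    # desired fraction of active sensors (hypothetical)

def reward_probability(active_fraction: float) -> float:
    """Reward curve peaked at TARGET; the exact shape is a design choice."""
    return 0.9 * (1.0 - abs(active_fraction - TARGET) / max(TARGET, 1 - TARGET)) + 0.05

def step(states):
    """One synchronous round: vote, compute reward, update every automaton."""
    active = sum(1 for s in states if s > 0)
    r = reward_probability(active / len(states))
    new_states = []
    for s in states:
        if random.random() < r:     # rewarded: move deeper into the current side
            s = min(s + 1, M) if s > 0 else max(s - 1, -M)
        else:                       # penalized: move toward, or across, zero
            if s > 0:
                s = s - 1 if s > 1 else -1
            else:
                s = s + 1 if s < -1 else 1
        new_states.append(s)
    return new_states, active / len(states)

states = [random.choice([-1, 1]) for _ in range(N_SENSORS)]
for _ in range(200):
    states, frac = step(states)
print(f"fraction of active sensors after 200 rounds: {frac:.2f}")
```

Over many rounds the population of automata settles near the target fraction of active sensors, which is the kind of quality-of-service versus coverage trade-off the paper discusses.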

Relevance:

100.00%

Publisher:

Abstract:

Images of a scene, static or dynamic, are generally acquired at different epochs from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused together to provide a unique, consistent representation of the whole scene, even recovering the third dimension, permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on the use of pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometric mapping functions between different images or different parts of them. These mapping functions contain relative motion parameters between the scene and the sensor(s) and can be used to integrate the information coming from the different sources accordingly, to build a wider or even augmented representation of the scene. For their scene reconstruction and pose estimation capabilities, image registration techniques from multiple views are nowadays increasingly attracting the interest of the scientific and industrial community. Depending on the application domain, accuracy, robustness, and computational load of the algorithms are important issues to be addressed, and generally a trade-off among them has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems. This thesis follows a general research approach to cope with these issues, largely independently of the scene content, under the constraint of rigid motions; this approach is motivated by portability to very different domains, a highly desirable property. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while the microscope holder is moved manually. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate its three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
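
The thesis text here includes no code; as an illustration of the kind of feature-based registration it describes, the sketch below estimates a homography between two overlapping images with OpenCV and warps one onto the other, as a first step of a simple mosaic. The file names, feature counts and RANSAC threshold are placeholders, not the thesis' pipeline.

```python
import cv2
import numpy as np

# Minimal feature-based registration between two overlapping views.
# "frame_a.png" / "frame_b.png" are placeholder file names.
img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)            # detector/descriptor
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly estimate the mapping between the two views (RANSAC rejects outliers).
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Warp image A into B's frame: the first step of a simple mosaic.
h, w = img_b.shape
warped_a = cv2.warpPerspective(img_a, H, (w, h))
mosaic = np.maximum(warped_a, img_b)            # naive blending, for illustration only
cv2.imwrite("mosaic.png", mosaic)
```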

Relevance:

100.00%

Publisher:

Abstract:

A publication record provides evidence of research productivity and is critical for junior scholars starting their careers in academia. Publication attributes, such as level of the publication outlet, order and number of authors, are typically used to evaluate its quality. However, time spent on a publication is a limited commodity, and researchers face significant trade-offs when deciding which publications they should concentrate on. To better understand the choices made, conjoint analysis with 241 junior IS scholars was conducted. We find that when “quality vs. number of authors” and “quality vs. time” trade-offs are considered, quality is prioritized. However, the emphasis on quality is less pronounced when “rank as an author” is at stake. Especially Ph.D. students tend to choose first authorship when dealing with “quality vs. rank as an author” trade-off. Our findings provide intriguing insights into how publication attributes weigh against each other when research collaboration decisions are made.
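
The abstract does not describe the estimation itself; the sketch below shows one common way part-worth utilities are recovered in a ratings-based conjoint setup, using dummy-coded OLS. The attribute names, levels and ratings are illustrative, not the ones used in the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical ratings-based conjoint data: each row is one publication profile
# rated by a respondent (1-7 preference). Attributes and levels are illustrative.
profiles = pd.DataFrame({
    "outlet":  ["A-journal", "B-journal", "conference", "A-journal", "conference", "B-journal"],
    "rank":    ["first", "middle", "first", "middle", "middle", "first"],
    "authors": ["2", "4", "2", "4", "2", "4"],
    "rating":  [7, 3, 5, 5, 2, 4],
})

# Dummy-code attribute levels (drop_first fixes one reference level per attribute).
X = pd.get_dummies(profiles[["outlet", "rank", "authors"]], drop_first=True).astype(float)
X = sm.add_constant(X)
y = profiles["rating"].astype(float)

# Part-worth utilities are the regression coefficients relative to the reference levels.
model = sm.OLS(y, X).fit()
print(model.params)
```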

Relevance:

100.00%

Publisher:

Abstract:

We present two concurrent semantics (i.e. semantics where concurrency is explicitly represented) for CC programs with atomic tells. One is based on simple partial orders of computation steps, while the other is based on contextual nets and is an extension of a previous one for eventual CC programs. Both semantics allow us to derive concurrency, dependency, and nondeterminism information for the considered languages. We prove some properties about the relation between the two semantics, and also about the relation between them and the operational semantics. Moreover, we discuss how to use the contextual net semantics in the context of CLP programs. More precisely, by interpreting concurrency as possible parallelism, our semantics can be useful for a safe parallelization of some CLP computation steps. Dually, the dependency information may also be interpreted as necessary sequentialization, thus possibly exploiting it for the task of scheduling CC programs. Moreover, our semantics is also suitable for CC programs with a new kind of atomic tell (called locally atomic tell), which checks for consistency only the constraints it depends on. Such a tell achieves a reasonable trade-off between efficiency and atomicity, since the checked constraints can be stored in a local memory and are thus easily accessible even in a distributed implementation.
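
As a very small, informal illustration of how dependency information over computation steps can be read either as possible parallelism or as necessary sequentialization, the toy sketch below builds a partial order from a hypothetical trace of tell/ask steps. It is not the paper's contextual-net semantics; the trace and the dependency rule are invented for illustration.

```python
from itertools import combinations

# Toy CC-style trace: each step either tells (adds) a constraint or asks (checks) one.
steps = [
    ("s1", "tell", "X>0"),
    ("s2", "tell", "Y>0"),
    ("s3", "ask",  "X>0"),   # depends on s1
    ("s4", "ask",  "Y>0"),   # depends on s2
]

# Illustrative rule: an ask step depends on every earlier tell of the constraint it checks.
depends_on = {name: set() for name, _, _ in steps}
for i, (name, kind, c) in enumerate(steps):
    if kind == "ask":
        for prev_name, prev_kind, prev_c in steps[:i]:
            if prev_kind == "tell" and prev_c == c:
                depends_on[name].add(prev_name)

# Steps unrelated in the partial order may run in parallel (possible parallelism);
# related steps must be sequentialized (necessary sequentialization).
def ordered(a, b):
    return b in depends_on[a] or a in depends_on[b]

parallel_pairs = [(a, b) for (a, _, _), (b, _, _) in combinations(steps, 2) if not ordered(a, b)]
print("dependencies:", depends_on)
print("independent pairs:", parallel_pairs)
```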

Relevance:

100.00%

Publisher:

Abstract:

This thesis was carried out in the context of the UPMSat-2 project, a microsatellite under design and manufacture at the Instituto Universitario de Microgravedad "Ignacio Da Riva" (IDR/UPM) of the Universidad Politécnica de Madrid. Applying the Concurrent Engineering (CE) methodology in the framework of Multidisciplinary Design Optimization (MDO) is one of the main objectives of this work. In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding their in-orbit mission and scientific payloads in the early phases of the project. On the other hand, there are predetermined limitations on their mass and volume budgets, since most of them are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structural subsystem is the one most affected by the launcher constraints, which can affect different aspects, including dimensions, strength and frequency requirements.
In the first part of this thesis, the main focus is on developing a structural design sizing tool that contains not only the primary structure properties as variables but also satellite system-level variables such as the payload mass budget and the satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design within an extended design envelope. The structural sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. A Genetic Algorithm (GA) is applied to the design space for both single-objective and multi-objective optimizations. The result of the multi-objective optimization is a Pareto front based on two objectives: minimum satellite total mass and maximum payload mass budget. On the other hand, microsatellites are of interest for space missions because of their lower cost and development time. The high demand for remote sensing applications is a strong driver of their popularity in such missions, and satellite remote sensing missions are essential for long-term research on the Earth's resources and environment. In remote sensing missions there are tight interrelations between different requirements such as orbital altitude, revisit time, mission lifetime and spatial resolution, and all of these requirements can affect the whole design. In recent years, the application of CE to space missions has demonstrated a great advantage in reaching optimum design baselines considering both the performance and the cost of the project; a well-known example of CE application is the Concurrent Design Facility (CDF) of ESA (the European Space Agency). It is clear that for university-class microsatellite projects, having or developing such a facility seems beyond the project capabilities; nevertheless, practicing CE at any scale can be beneficial for university-class microsatellites as well.
In the second part of this thesis, the main focus is on developing an MDO framework applicable to the conceptual design phase of remote sensing microsatellites. This approach enables the design team to evaluate the interaction between the different system design variables. The presented MDO framework contains not only system-level variables, such as the satellite total mass and total power, but also mission requirements such as spatial resolution and revisit time. The microsatellite sizing process is divided into three major design disciplines: a) orbit design, b) payload sizing and c) bus sizing. First, different mission parameters are calculated for a practical range of sun-synchronous orbits (SS-Os). Then, according to the orbital parameters and a reference remote sensing instrument, the mass and power of the payload are calculated. Satellite bus sizing is based on mass and power estimates of the different subsystems, using design estimation relationships. In the bus sizing, the power subsystem design considers more detailed design variables, including the mission scenario and different types of solar cells and batteries. The mission scenario is selected so as to obtain a coverage belt on the Earth's surface parallel to the equator after each revisit interval. In order to evaluate the interrelations between the different variables inside the design space, all the mentioned design disciplines are combined in a unified code. The integrated satellite system sizing tool developed in this part is considered an application of CE to the conceptual design of remote sensing microsatellite projects. Finally, a basic MDO framework is coupled to the developed satellite system design tool, and design optimization is performed by means of a single-objective GA with the objective of minimizing the microsatellite total mass. According to the results of the MDO application, there exist different optimum design points, all with the minimum satellite total mass but with different mission variables. This output demonstrates the applicability of the MDO approach to system-engineering trade-off studies in the conceptual design phase of such projects.
The main conclusion of this thesis is that the classical satellite design approach, which usually starts with the mission and payload definition, is not necessarily the best methodology for all satellite projects; a university-class microsatellite is an example of such a project. For this reason, an integrated satellite sizing tool including different design disciplines, focusing on the structural subsystem and considering an a priori unknown payload, was developed. The results show that the minimum satellite total mass and the maximum mass available for an a priori unknown payload are conflicting objectives, and a multi-objective GA optimization was conducted to find the Pareto front. Based on the optimization results, it is concluded that selecting a satellite total mass in the range of 40-60 kg can be considered optimum for a university-class microsatellite project with unknown payload(s). The CE methodology was also applied to the conceptual design process of remote sensing microsatellites. The results of the CE application provide a clear understanding of the interaction between satellite system design requirements, such as satellite total mass and power, and mission variables such as revisit time and spatial resolution. The MDO application, performed with total mass minimization of a remote sensing satellite, clarifies the relationships between the different system and mission design variables and allows optimum design baselines to be selected for the chosen objective during the initial design phases.
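
No equations or code appear in the abstract; as a rough illustration of the two-objective structural sizing trade-off it describes (minimum total mass versus maximum payload mass budget), the sketch below evaluates a toy mass model over random designs and extracts the non-dominated set. The mass model, stiffness constraint and parameter ranges are invented for illustration and are not the thesis' sizing tool.

```python
import random

# Toy structural sizing model (illustrative only): a design is (panel thickness [mm],
# payload mass [kg]); structure mass grows with thickness, and a crude stiffness
# proxy requires thicker panels when more payload mass is carried.
def evaluate(thickness_mm, payload_kg):
    structure_kg = 8.0 + 2.5 * thickness_mm                   # hypothetical relation
    bus_kg = 20.0                                              # fixed bus allocation (hypothetical)
    total_kg = structure_kg + bus_kg + payload_kg
    stiffness_ok = thickness_mm >= 1.0 + 0.05 * payload_kg     # hypothetical constraint
    return total_kg, stiffness_ok

# Randomly sample feasible designs; objectives: minimize total mass, maximize payload.
designs = []
for _ in range(2000):
    t = random.uniform(0.5, 5.0)
    p = random.uniform(1.0, 20.0)
    total, ok = evaluate(t, p)
    if ok:
        designs.append((total, p, t))

# Keep the Pareto-optimal designs: no other design is both lighter and carries more payload.
pareto = [d for d in designs
          if not any(o[0] <= d[0] and o[1] >= d[1] and o != d for o in designs)]
for total, payload, thickness in sorted(pareto):
    print(f"total={total:5.1f} kg  payload={payload:4.1f} kg  t={thickness:.2f} mm")
```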

Relevance:

100.00%

Publisher:

Abstract:

The human brain is probably one of the most complex systems we face, and thus a timely and fascinating object of study. Characterizing how the brain organizes its activity to carry out complex tasks is highly non-trivial. While early neuroimaging and electrophysiological studies typically aimed at identifying patches of task-specific activation or local time-varying patterns of activity, there is now consensus that task-related brain activity has a temporally multiscale, spatially extended character, as networks of coordinated brain areas are continuously formed and destroyed. Until recently, though, the emphasis of functional brain activity studies was on the identity of the particular nodes forming these networks and on the characterization of connectivity metrics between them, the underlying hypothesis being that each node, constituting a coarse-grained representation of a given brain region, provides a unique contribution to the whole. Thus, functional neuroimaging initially integrated the two basic ingredients of early neuropsychology: localization of cognitive function in specialized brain modules and the role of connecting fibres in the integration of those modules. Lately, brain structure and function have started to be investigated using Network Science, a statistical-mechanics understanding of an old branch of mathematics: graph theory. Network Science allows functional networks to be endowed with a great number of quantitative properties (robustness, centrality, efficiency, ...), thus vastly enriching the set of objective descriptors of brain structure and function at neuroscientists' disposal. The link between Network Science and Neuroscience has shed light on the entangled anatomy of the brain and on how patterns of cortical activity may synchronize to generate the so-called functional brain networks, the principal object of study of this PhD Thesis. Within this context, complexity emerges as the bridge between the topological and dynamical properties of biological systems and, more specifically, between the organization and the dynamics of functional brain networks.
This Thesis is, in general terms, a study of how cortical activity can be understood as the output of a network of dynamical systems intimately related to the processes occurring in the brain. To this end, I performed five studies that encompass both the topological and the dynamical aspects of such functional brain networks. The Thesis is divided into three major parts: Introduction, Results and Discussion. The first part, comprising Chapters 1, 2 and 3, gives an overview of the main concepts of Network Science related to the analysis of brain imaging. Chapter 1 is devoted to introducing the reader to the world of complexity, especially the topological and dynamical complexity of networked systems. Chapter 2 develops the biological, structural and functional fundamentals of the brain when it is interpreted as a complex network. Chapter 3 summarizes the main objectives and tasks developed in the second part of the Thesis. The second part is the core of the Thesis, since it contains the results obtained over the last four years; it is divided into five Chapters, containing a detailed version of the publications carried out during the Thesis. Chapter 4 deals with the topology of functional networks and, specifically, with the detection and quantification of the most important nodes of the network: the hubs. Chapter 5 shows how functional brain networks can be viewed not as a single network, but as a network-of-networks whose components have to coexist in a trade-off situation; I investigate how the brain hemispheres compete to acquire centrality in the network-of-networks, and how this interplay is maintained (or not) when failures are deliberately introduced into the functional network. Chapter 6 goes one step further by considering functional networks as living systems: analyzing the evolution of the network topology, instead of treating it as a static system, allows their structure to be better characterized. This is especially relevant when trying to find differences between groups performing a memory task, in which functional networks show strong fluctuations. Chapter 7 defines how to create parenclitic networks from brain activity datasets. This new kind of network, recently introduced to study abnormalities between control and anomalous groups, had not previously been implemented on brain data, and in this Chapter I explain how to do so when evaluating the consistency of brain dynamics. To conclude this part of the Thesis, Chapter 8 focuses on the relationship between the topological properties of the nodes within a network and their dynamical features; as I show, there is an interplay between them which reveals that the position of a node within a network is intimately correlated with its dynamical properties. Finally, the last part of the Thesis consists only of Chapter 9, which contains the conclusions and future perspectives that may arise from the presented work. In view of all this, I hope that this Thesis provides a complementary perspective on one of the most extraordinary complex systems we face: the human brain.
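
As a small, self-contained illustration of the kind of hub detection mentioned for Chapter 4, the sketch below builds a functional network by thresholding a stand-in correlation matrix (random synthetic signals) and ranks nodes by degree and betweenness centrality with networkx. The threshold and the synthetic data are arbitrary choices, not those of the Thesis.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in "functional connectivity": correlations between 30 synthetic signals.
signals = rng.standard_normal((30, 500))            # 30 regions x 500 time points
corr = np.corrcoef(signals)

# Threshold the correlation matrix to obtain an undirected functional network.
threshold = 0.1                                      # arbitrary illustrative value
adj = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

# Rank candidate hubs by two common centrality measures.
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
hubs = sorted(G.nodes(), key=lambda n: (degree[n], betweenness[n]), reverse=True)[:5]
for n in hubs:
    print(f"node {n:2d}  degree={degree[n]:2d}  betweenness={betweenness[n]:.3f}")
```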

Relevance:

100.00%

Publisher:

Abstract:

The various theories of capital structure attract interest and have motivated numerous studies on the subject without, however, reaching a consensus. Another apparently little-explored theme concerns the life cycle of firms and how it may influence capital structure. This study aimed to verify which determinants are most relevant to firms' leverage and whether these determinants change depending on the firm's life-cycle stage, drawing on the Trade-Off, Pecking Order and Agency theories. To achieve this objective, a fixed-effects panel analysis was used on a sample of publicly traded Brazilian companies, with secondary data available from Economática® for the period 2005 to 2013, using the BM&FBOVESPA sectors. As the main result, the overall sample and the high- and low-growth groups show the same behaviour for book leverage: the determinant Profitability presents a negative relationship, while the determinants Growth Opportunity and Size present a positive one. For the high- and low-growth groups, some determinants showed different results: uniqueness was significant in both groups, being positive for low growth and negative for high growth, while the collateral value of assets and the non-debt tax shield were significant only in the low-growth group. For market-value leverage, significance was observed for the non-debt tax shield and uniqueness. These results reinforce the argument that the life cycle influences capital structure.
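
The abstract describes a fixed-effects panel regression of leverage on firm-level determinants; the sketch below shows what such an estimation might look like with the linearmodels package, on hypothetical column and file names (the study's actual variables and data are not reproduced here).

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical firm-year panel; column names are placeholders for the study's variables.
df = pd.read_csv("firm_panel.csv")                      # placeholder file
df = df.set_index(["firm_id", "year"])                  # entity and time indices

# Book leverage regressed on profitability, growth opportunity and size,
# with firm fixed effects and firm-clustered standard errors.
model = PanelOLS.from_formula(
    "book_leverage ~ profitability + growth_opportunity + size + EntityEffects",
    data=df,
)
results = model.fit(cov_type="clustered", cluster_entity=True)
print(results.summary)
```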

Relevance:

100.00%

Publisher:

Abstract:

Market mechanisms are a means by which resources in contention can be allocated between contending parties, both in human economies and in those populated by software agents. Designing such mechanisms has traditionally been carried out by hand, and more recently by automation. Assessing these mechanisms typically involves evaluating them with respect to multiple conflicting objectives, which can often be nonlinear, noisy, and expensive to compute. For typical performance objectives, it is known that designed mechanisms often fall short of being optimal across all objectives simultaneously. However, in all previous automated approaches, either only a single objective is considered, or the multiple performance objectives are combined into a single objective. In this paper we do not aggregate objectives; instead, we consider a direct, novel application of multi-objective evolutionary algorithms (MOEAs) to the problem of automated mechanism design. This allows the automatic discovery of the trade-offs that such objectives impose on mechanisms. We pose the problem of mechanism design, specifically for the class of linear redistribution mechanisms, as a naturally existing multi-objective optimisation problem. We apply a modified version of NSGA-II in order to design mechanisms within this class, given economically relevant objectives such as welfare and fairness. This application of NSGA-II exposes trade-offs between objectives, revealing relationships between them that were previously unknown for this mechanism class. The understanding of the trade-offs gained from applying MOEAs can thus help practitioners apply the discovered mechanisms insightfully in their respective real or artificial markets.
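
The paper applies a modified NSGA-II; as a small illustration of the core operation that NSGA-II relies on, the sketch below ranks candidate mechanisms into non-dominated fronts for two objectives to be maximized (here labelled welfare and fairness). The candidate scores are invented for illustration, and this is not the authors' implementation.

```python
# Rank candidates into Pareto fronts (both objectives to be maximized), the core
# bookkeeping step of NSGA-II. Candidate scores below are invented for illustration.
candidates = {
    "m1": (0.90, 0.20),   # (welfare, fairness) -- hypothetical values
    "m2": (0.70, 0.60),
    "m3": (0.40, 0.80),
    "m4": (0.60, 0.50),   # dominated by m2
    "m5": (0.20, 0.90),
}

def dominates(a, b):
    """a dominates b if it is at least as good in every objective and better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_fronts(scores):
    remaining = dict(scores)
    fronts = []
    while remaining:
        front = [k for k in remaining
                 if not any(dominates(remaining[o], remaining[k]) for o in remaining if o != k)]
        fronts.append(front)
        for k in front:
            del remaining[k]
    return fronts

for rank, front in enumerate(non_dominated_fronts(candidates), start=1):
    print(f"front {rank}: {front}")
```

The first front is the set of discovered trade-offs a practitioner would inspect; NSGA-II additionally uses crowding distance and genetic operators to evolve the population toward, and spread it along, that front.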

Relevance:

100.00%

Publisher:

Abstract:

A key aspect underpinning life-history theory is the existence of trade-offs. Trade-offs occur because resources are limited, meaning that individuals cannot invest in all traits simultaneously, leading to costs for traits such as growth and reproduction. Such costs may be the reason for the sub-maximal growth rates that are often observed in nature, though the fitness consequences of these costs would depend on the effects on lifetime reproductive success. Recently, much attention has been given to the physiological mechanism that might underlie these life-history trade-offs, with oxidative stress (OS) playing a key role. OS is characterised by a build-up of oxidative damage to tissues (e.g. proteins, lipids and DNA) from attack by reactive species (RS). RS, the majority of which are by-products of metabolism, are usually neutralised by antioxidants; however, OS occurs when there is an imbalance between the two. There are two main theories linking OS with growth and reproduction. The first is that traits like growth and reproduction, being metabolically demanding, lead to an increase in RS production. The second involves the diversion of resources away from self-maintenance processes (e.g. the redox system) when individuals are faced with enhanced growth or reproductive expenditure. Previous research investigating trade-offs involving growth or reproduction and self-maintenance has been equivocal. One reason for this could be that associations among redox biomarkers can vary greatly, so that the biomarker selected for analysis can influence the conclusion reached about an individual’s oxidative status. Therefore the first aim of my thesis was to explore the strength and pattern of integration of five biomarkers of OS (three antioxidants, one damage and one general oxidation measure) in wild blue tit (Cyanistes caeruleus) adults and nestlings (Chapter 2). In doing so, I established that all five biomarkers should be included in future analyses. Using this collection of biomarkers, I then explored my next aims: whether enhanced growth (Chapters 3 and 4) or reproductive effort (Chapter 5) can lead to increased OS levels, if these traits are traded off against self-maintenance. I accomplished these aims using both a meta-analytic and an experimental approach, the latter involving manipulation of brood size in wild blue tits in order to experimentally alter the growth rate of nestlings and the provisioning rate (a proxy for reproductive expenditure) of adults. I also investigated the potential for redox integration to be used as an index of body condition (Chapter 2), allowing predictions to be made about the future fitness consequences of changes to oxidative state. A growth–self-maintenance trade-off was supported by my meta-analytic results (Chapter 4), which found OS to be a constraint on growth. However, when faced with experimentally enhanced growth, animals were typically not able to adjust this trade-off, so that oxidative damage resulted. This might support the idea that energetically expensive growth causes resources to be diverted away from the redox system; however, antioxidants did not show an overall reduction in response to growth in the meta-analysis, suggesting that oxidative costs of growth may result from increased RS production due to the greater metabolism needed for enhanced growth. My experimental data (Chapter 3) showed a similar pattern, with raised protein damage levels (protein carbonyls; PCs) in the fastest-growing blue tit chicks in a brood, compared with their slower-growing siblings. These within-brood differences in OS levels likely resulted from within-brood hierarchies and might have masked any between-brood differences, which were not observed here. Despite evidence for a growth–self-maintenance trade-off, my experimental results on blue tits found no support for the hypothesis that self-maintenance is also traded off against reproduction, another energetically demanding trait. There was no link between experimentally altered reproductive expenditure and OS, nor was there a direct correlation between reproductive effort and OS (Chapter 5). However, there are various factors that likely influence whether oxidative costs are observed, including environmental conditions and whether such costs are transient. This emphasises the need for longitudinal studies following the same individuals over multiple years and across a wide range of habitats that differ in quality. This would allow investigation into how key life events interact; it might be that raised OS levels from rapid early growth have the potential to constrain reproduction, or that high parental OS levels constrain offspring growth. Any oxidative costs resulting from these life-history trade-offs have the potential to impact on future fitness. Redox integration of certain biomarkers might prove to be a useful tool in making predictions about fitness, as I found in Chapter 2, as well as in establishing how the redox system responds, as a whole, to changes in growth and reproduction. Finally, if the tissues measured can tolerate a given level of OS, then the level of oxidative damage might be irrelevant and not impact on future fitness at all.

Relevance:

100.00%

Publisher:

Abstract:

Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
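
As a small, self-contained illustration of the kind of MDP formulation described (trading the cost of reading more input against prediction accuracy), the sketch below defines a toy "read or predict" decision process and evaluates a simple threshold policy along the cost-accuracy trade-off. The confidence model, cost value and thresholds are invented for illustration; they are not the dissertation's algorithms.

```python
import random

# Toy sequential-prediction MDP: at each step the agent either READs the next token
# (paying a cost) or PREDICTs now (earning accuracy that grows with the input seen).
READ_COST = 0.05          # hypothetical per-token cost

def accuracy_given(tokens_seen: int, total_tokens: int) -> float:
    """Stand-in accuracy model: more context, better prediction (diminishing returns)."""
    return 1.0 - 0.5 ** (1 + tokens_seen / max(total_tokens, 1) * 4)

def run_episode(total_tokens: int, confidence_threshold: float) -> float:
    """Threshold policy: keep reading until estimated accuracy exceeds the threshold."""
    tokens_seen = 0
    while tokens_seen < total_tokens and accuracy_given(tokens_seen, total_tokens) < confidence_threshold:
        tokens_seen += 1                          # action READ
    reward = accuracy_given(tokens_seen, total_tokens) - READ_COST * tokens_seen
    return reward                                 # action PREDICT ends the episode

# Compare policies along the cost-accuracy trade-off on random input lengths.
random.seed(0)
for threshold in (0.6, 0.8, 0.95):
    rewards = [run_episode(random.randint(5, 30), threshold) for _ in range(1000)]
    print(f"threshold={threshold:.2f}  mean reward={sum(rewards) / len(rewards):.3f}")
```

Sweeping the threshold traces out the cost-accuracy curve; a learned policy (e.g. via imitation or reinforcement learning, as in the dissertation) would replace the fixed threshold with a state-dependent decision.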