887 results for Specific theories and interaction models
Abstract:
Geoelectrical soundings were carried out in 29 different places in order to find permafrost and to measure its thickness. In most places above the timber line a permafrost thickness of 10-50 m was recorded. Permafrost was found at sites with a thin snow cover during winter. Here, deflation phenomena on the summits of fjells indicate the occurrence of permafrost. Vegetation type might be a good indicator of permafrost, too. It seems evident that permafrost exists extensively on the fjell summits of northern Finland.
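For context on the measurement principle (a generic sketch, not this survey's specific electrode configuration or inversion procedure): vertical electrical soundings infer layer resistivities, and hence permafrost thickness, from the apparent resistivity measured with a four-electrode array,

```latex
\[
\rho_a \;=\; K\,\frac{\Delta V}{I},
\qquad
K_{\text{Wenner}} \;=\; 2\pi a ,
\]
```

where ρ_a is the apparent resistivity, ΔV the measured potential difference, I the injected current, and K a geometric factor (shown here for a Wenner array with electrode spacing a). Frozen ground is markedly more resistive than unfrozen ground, which is what makes the method sensitive to permafrost.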
Abstract:
This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry and, as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
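As a rough illustration of what a semiformal argument-modeling scheme can look like in software (class and relation names here are hypothetical and are not the ontology described in the article), claims and typed rhetorical links between them can be held in a small graph structure:

```python
from dataclasses import dataclass, field

# Hypothetical, minimal argument-graph sketch: nodes are claims or evidence,
# edges are typed rhetorical links such as "supports" or "challenges".

@dataclass
class Node:
    node_id: str
    text: str
    kind: str  # e.g. "claim", "evidence", "method"

@dataclass
class Link:
    source: str
    target: str
    relation: str  # e.g. "supports", "challenges", "uses"

@dataclass
class ArgumentGraph:
    nodes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def connect(self, source: str, target: str, relation: str) -> None:
        self.links.append(Link(source, target, relation))

    def challengers_of(self, node_id: str):
        """Return the nodes that challenge the given claim."""
        return [self.nodes[l.source] for l in self.links
                if l.target == node_id and l.relation == "challenges"]

# Usage sketch
g = ArgumentGraph()
g.add_node(Node("c1", "Method X improves retrieval precision", "claim"))
g.add_node(Node("e1", "Study Y reports lower precision for X", "evidence"))
g.connect("e1", "c1", "challenges")
print([n.text for n in g.challengers_of("c1")])
```

Queries such as `challengers_of` are the kind of computational service over argument structures that the article has in mind, here reduced to a toy form.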
Abstract:
A significant body of scholarly and practitioner-based research has developed in recent years that has sought both to theorize upon and to empirically measure the competitiveness of regions. However, the disparate and fragmented nature of this work has left the various analyses and measurement methodologies without a substantive theoretical foundation. The aim of this paper is to place the regional competitiveness discourse within the context of theories of economic growth and, more particularly, those concerning regional economic growth. It is argued that regional competitiveness models are usually implicitly constructed in the lineage of endogenous growth frameworks, whereby deliberate investments in factors such as human capital and knowledge are considered to be key drivers of growth differentials. This leads to the suggestion that regional competitiveness can be usefully defined as the capacity and capability of regions to achieve economic growth relative to other regions at a similar overall stage of economic development, which will usually be within their own nation or continental bloc. The paper further assesses future avenues for theoretical and methodological exploration, highlighting the role of institutions, resilience, and well-being in understanding how the competitiveness of regions influences their long-term evolution.
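For readers outside the growth literature, a representative endogenous-growth-style specification (purely illustrative; not a model estimated in the paper) ties a region's output to accumulable knowledge and human capital, so that deliberate investment drives growth differentials:

```latex
\[
Y_r \;=\; A_r\, K_r^{\alpha}\,\bigl(h_r L_r\bigr)^{1-\alpha},
\qquad
\dot{A}_r \;=\; \delta\, R_r ,
\]
```

where, for region r, Y is output, K physical capital, L employment, h human capital per worker, A a knowledge stock accumulated through deliberate investment R (e.g. R&D effort), α ∈ (0,1), and δ > 0. Because A and h are chosen through investment rather than given exogenously, regions making different investment choices grow at persistently different rates, which is the logic the paper attributes to regional competitiveness models.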
Abstract:
One of the fundamental questions of modern accounting is how to identify the addressees of financial reporting, the stakeholders. This concern already played a central role in the classical theories, which have since been superseded, and it has become key in modern and postmodern theories. Experience shows that the set of identified stakeholders has been modified and broadened over time. In examining this evolution, numerous characteristics of accounting were identified through which the relevant rules can be improved. In addition, studying this evolution made it directly observable under which conditions the need for a power that regulates accounting externally can be justified. The analysis also identified situations in which the accounting regulator and "externally directed" financial reporting lead to a suboptimal outcome. The article presents the evolution of stakeholder theories starting from the classical views. It reveals what the modern, currently accepted coalition view of the firm has brought that is new and, above all, how it gave rise to the external regulator. _____ One of the key problems of modern financial accounting is how to define the stakeholders. This problem was already a key issue in the classical stakeholder theories, which have since become outdated. Research and experience show that the group of stakeholders has widened and been modified. Through this evolution, researchers identified many characteristics of financial reporting through which regulation could be improved. This advance pointed out the situations in which the existence of an external accounting regulator may be justified, since under given circumstances this existence leads to a suboptimal scenario. This paper deals with the stakeholder theories, starting with the classical ones. The article points out how the currently accepted theory changed the assertions of the previous one and how the external regulator was created as an inevitable consequence. The paper also highlights the main issues raised by the postmodern theories, those which try to fit current questions into the current stakeholder models. The article also provides Hungarian evidence for the previously mentioned suboptimal scenario, where regulation that is not tax-driven proves to be suboptimal.
Abstract:
Resource allocation decisions are made to serve the current emergency without knowing which future emergency will occur. Different ordered combinations of emergencies result in different performance outcomes. Even though future decisions can be anticipated with scenarios, previous models assume that events over a time interval are independent. This dissertation instead assumes that events are interdependent, because the speed reduction and rubbernecking caused by an initial incident provoke secondary incidents. The misconception that secondary incidents are uncommon has led to the look-ahead concept being overlooked. This dissertation is a pioneer in relaxing the structural assumption of independence in the assignment of emergency vehicles. When an emergency is detected and a request arrives, an appropriate emergency vehicle is immediately dispatched. We provide tools for quantifying impacts based on the fundamentals of incident occurrence through the identification, prediction, and interpretation of secondary incidents. A proposed online dispatching model minimizes the cost of moving the next emergency unit while keeping the response as close to optimal as possible. Using the look-ahead concept, the online model flexibly re-computes the solution, basing future decisions on present requests. We introduce various online dispatching strategies with visualization of the algorithms, and provide insights into their differences in behavior and solution quality. The experimental evidence indicates that the algorithm works well in practice. After having served a designated request, the available and/or remaining vehicles are relocated to a new base for the next emergency. System costs will be excessive if the delay associated with dispatching decisions is ignored when relocating response units. This dissertation presents an integrated method that begins with a location phase to manage initial incidents and progresses through a dispatching phase to manage the stochastic occurrence of subsequent incidents. Previous studies used the frequency of independent incidents and ignored scenarios in which two incidents occurred within proximal regions and intervals. The proposed analytical model relaxes the structural assumptions of the Poisson process (independent increments) and incorporates the evolution of primary and secondary incident probabilities over time. The mathematical model overcomes several limiting assumptions of previous models, such as no waiting time, the rule of returning to the original depot, and fixed depots. The look-ahead-based temporal locations are compared with current practice, which locates units at depots based on Poisson theory. A linearization of the formulation is presented, and an efficient heuristic algorithm is implemented to deal with a large-scale problem in real time.
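A toy sketch of the online dispatching idea follows (purely illustrative; the dissertation's cost structure, probability models, and look-ahead machinery are far richer, and all names and parameters here are hypothetical): dispatch the unit that minimizes the immediate travel cost plus an estimate of the expected cost of covering a possible secondary incident nearby.

```python
import math

# Toy illustration of greedy online dispatching with a one-step look-ahead.
# Units and incidents live on a plane; travel cost is Euclidean distance.
# The look-ahead term penalizes sending a unit whose removal would leave an
# anticipated secondary incident (assumed near the current one) poorly covered.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dispatch(units, incident, secondary_guess, p_secondary=0.3):
    """Pick the available unit minimizing immediate cost + expected future cost.

    units           : dict name -> (x, y) of available units
    incident        : (x, y) of the current request
    secondary_guess : (x, y) where a secondary incident is anticipated
    p_secondary     : assumed probability that the secondary incident occurs
    """
    best_name, best_cost = None, float("inf")
    for name, pos in units.items():
        immediate = dist(pos, incident)
        remaining = [p for n, p in units.items() if n != name]
        # Expected cost of covering the anticipated secondary incident with
        # whichever unit would remain closest to it after this dispatch.
        future = (p_secondary * min(dist(p, secondary_guess) for p in remaining)
                  if remaining else p_secondary * 1e6)
        total = immediate + future
        if total < best_cost:
            best_name, best_cost = name, total
    return best_name, best_cost

# Usage sketch
units = {"A": (0.0, 0.0), "B": (5.0, 1.0), "C": (9.0, 9.0)}
print(dispatch(units, incident=(4.0, 0.0), secondary_guess=(4.5, 0.5)))
```

In this example the look-ahead term favors unit B: it is close to the current request, while unit A stays free near the spot where a secondary incident is expected.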
Abstract:
Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity. Namely, current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. However, in real-world deployments, these security notions are often stronger than needed, and the number of parties running a protocol is often small. In this thesis we take several steps towards bridging the efficiency gap of secure computation by constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions:
- We show an efficient (when amortized over multiple executions) maliciously secure two-party secure computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties.
- We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it is caught then the honest party obtains a certificate proving that it cheated.
- We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs.
- We demonstrate an efficient maliciously secure protocol in the three-party setting.
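As a toy illustration of the covert-security idea mentioned above (not any protocol constructed in the thesis): in a cut-and-choose style check where the honest party opens and verifies all but one of k prepared instances, a party that corrupts exactly one instance is caught with probability (k-1)/k, so modest k already provides a strong deterrent.

```python
import random

# Toy simulation: a cheater corrupts one of k instances; the checker opens
# k-1 instances chosen at random and catches the cheat unless the single
# unopened instance happens to be the corrupted one.

def caught_probability(k, trials=100_000):
    caught = 0
    for _ in range(trials):
        corrupted = random.randrange(k)   # which instance is bad
        unopened = random.randrange(k)    # which instance stays closed
        if corrupted != unopened:         # the bad instance was opened and checked
            caught += 1
    return caught / trials

for k in (2, 5, 10):
    print(f"k={k}: empirical {caught_probability(k):.3f}, exact {(k - 1) / k:.3f}")
```

Publicly verifiable covert security adds to this deterrence a transferable certificate of cheating, which is the property the thesis makes cheaper to obtain.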
Abstract:
We study a climatologically important interaction between two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature, as a dynamic boundary condition, to a diagnostic ocean model having an additional spatial dimension. In this work we give deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat for the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition, and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected in this class of models, which generalizes previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and Runge–Kutta total variation diminishing (TVD) time integration.
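To see where multiple stationary solutions (and hence an S-shaped bifurcation diagram) can come from in energy balance models, here is a minimal zero-dimensional sketch with an ice-albedo feedback, which already admits several equilibria. It is illustrative only and much simpler than the coupled model analysed in the paper; the parameter values are generic choices in the usual Budyko-type range.

```python
import numpy as np

# Zero-dimensional energy balance sketch:
#   C dT/dt = Q*(1 - albedo(T)) - (A + B*(T - 273.15))
# The ice-albedo feedback (albedo ~0.6 when frozen, ~0.3 when warm) makes the
# net balance R(T) non-monotone, so R(T) = 0 can have several roots: the
# multiple stationary solutions behind an S-shaped bifurcation diagram.

Q = 342.0           # mean incoming solar radiation [W m^-2]
A, B = 202.0, 1.9   # linearized outgoing long-wave radiation [W m^-2], [W m^-2 K^-1]

def albedo(T):
    """Smooth step from an ice-covered (0.6) to an ice-free (0.3) planet near 265 K."""
    return 0.6 - 0.3 * 0.5 * (1.0 + np.tanh((T - 265.0) / 5.0))

def R(T):
    """Net energy imbalance [W m^-2]; steady states are the roots of R."""
    return Q * (1.0 - albedo(T)) - (A + B * (T - 273.15))

# Bracket sign changes on a grid, then refine each root by bisection.
Ts = np.linspace(200.0, 340.0, 2801)
roots = []
for a, b in zip(Ts[:-1], Ts[1:]):
    if R(a) * R(b) < 0.0:
        lo, hi = a, b
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if R(lo) * R(mid) <= 0.0 else (mid, hi)
        roots.append(0.5 * (lo + hi))

print("steady-state temperatures [K]:", [round(r, 1) for r in roots])
# Typically three: a cold and a warm stable state separated by an unstable one.
```

Sweeping a parameter such as Q and plotting the equilibria traces out the S-shaped branch structure that the paper's multiplicity results point to in a far more general setting.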
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge of the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme employing a deep-learning-based denoiser trained on the gradient domain. In the second part, we address natural image deblurring, image and video super-resolution microscopy, and positioning time-series prediction through deep-learning-based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
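As a minimal, generic example of a gradient-sparsity-promoting variational model of the kind discussed in the first part (not the thesis's specific schemes or its parameter-selection strategies), one can denoise a 1-D signal by minimizing a data-fidelity term plus a smoothed total-variation penalty with plain gradient descent; λ is the regularization strength whose choice the thesis studies.

```python
import numpy as np

# Smoothed total-variation (gradient-sparsity) denoising of a 1-D signal:
#   min_x  0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps^2)
# solved with plain gradient descent. Illustrative only.

def tv_denoise(y, lam=1.0, eps=0.05, step=0.02, iters=3000):
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)                          # forward differences
        w = d / np.sqrt(d**2 + eps**2)          # derivative of the smoothed |d|
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        grad = (x - y) + lam * grad_tv          # gradient of the full objective
        x -= step * grad
    return x

# Usage sketch: piecewise-constant signal corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = tv_denoise(noisy, lam=0.5)
print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)).round(4))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)).round(4))
```

A Plug-and-Play variant would replace the handcrafted gradient of the penalty with a learned denoising step, which is the kind of hybrid model-driven/data-driven scheme the thesis develops and analyses for convergence.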
Abstract:
In this thesis I present a new threefold connection we found between quantum integrability, N=2 supersymmetric gauge theories, and black hole perturbation theory. I use the approach of the ODE/IM correspondence between Ordinary Differential Equations (ODE) and Integrable Models (IM), first to connect basic integrability functions - Baxter's Q, T and Y functions - to the gauge theory periods. This fundamental identification allows several new results for both theories, for example: an exact nonlinear integral equation (Thermodynamic Bethe Ansatz, TBA) for the gauge periods; an interpretation of the integrability functional relations as new exact R-symmetry relations for the periods; and new formulas for the local integrals of motion in terms of gauge periods. I develop this in full detail at least for the SU(2) gauge theory with Nf=0,1,2 matter flavours. Again through the ODE/IM correspondence, I connect the mathematically precise definition of the quasinormal modes of black holes (which play an important role in gravitational-wave observations) with quantization conditions on the Q and Y functions. In this way I also give a mathematical explanation of the recently found connection between quasinormal modes and N=2 supersymmetric gauge theories. Moreover, a new, simple, and effective method to compute the quasinormal modes numerically follows - the TBA - which I compare with other standard methods. The spacetimes for which I show this in full detail are, in the simplest Nf=0 case, the D3 brane and, in the Nf=1,2 cases, a generalization of extremal Reissner-Nordström (charged) black holes. I then begin treating the Nf=3,4 theories as well and argue how our integrability-gauge-gravity correspondence can generalize to other types of black holes in either asymptotically flat (Nf=3) or anti-de Sitter (Nf=4) spacetime. Finally, I begin to show the extension to a fourfold correspondence that also includes Conformal Field Theory (CFT), through the renowned AdS/CFT correspondence.
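For orientation, the Thermodynamic Bethe Ansatz mentioned here is, in its simplest standard single-particle form (written generically, not in the gauge-period variables of the thesis), a nonlinear integral equation for a pseudo-energy ε(θ), from which a Y function follows:

```latex
\[
\varepsilon(\theta) \;=\; m R \cosh\theta
\;-\; \int_{-\infty}^{\infty} \frac{d\theta'}{2\pi}\,
\varphi(\theta-\theta')\,
\log\!\bigl(1 + e^{-\varepsilon(\theta')}\bigr),
\qquad
Y(\theta) \;=\; e^{-\varepsilon(\theta)} ,
\]
```

where θ is the rapidity, mR a mass scale, and φ a kernel fixed by the model's S-matrix. Equations of this type are solved efficiently by iteration, which is what makes the TBA attractive as a numerical route to quantities such as quasinormal-mode frequencies.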
Abstract:
Bioelectronic interfaces have advanced significantly in recent years, offering potential treatments for vision impairments, spinal cord injuries, and neurodegenerative diseases. However, the classical neurocentric vision drives technological development toward neurons. Emerging evidence highlights the critical role of glial cells in the nervous system. Among them, astrocytes significantly influence neuronal networks throughout life and are implicated in several neuropathological states. Although they are incapable of firing action potentials, astrocytes communicate through diverse calcium (Ca2+) signalling pathways, crucial for cognitive functions and the regulation of brain blood flow. Current bioelectronic devices are primarily designed to interface neurons and are unsuitable for studying astrocytes. Graphene, with its unique electrical, mechanical, and biocompatibility properties, has emerged as a promising neural interface material. However, its use as an electrode interface to modulate astrocyte functionality remains unexplored. The aim of this PhD work was to exploit graphene oxide (GO)- and reduced GO (rGO)-coated electrodes to control Ca2+ signalling in astrocytes by electrical stimulation. We discovered that distinct Ca2+ dynamics can be evoked in astrocytes, in vitro and in brain slices, depending on the conductive/insulating properties of the rGO/GO electrodes. Stimulation by rGO electrodes induces an intracellular Ca2+ response with sharp peaks of oscillations ("P-type"), exclusively due to Ca2+ release from intracellular stores. Conversely, astrocytes stimulated by GO electrodes show a slower and sustained Ca2+ response ("S-type"), largely mediated by external Ca2+ influx through specific ion channels. Astrocytes respond faster than neurons and activate distinct G-protein-coupled receptor intracellular signalling pathways. We propose a resistive/insulating model, hypothesizing that the different conductivity of the substrate influences the electric field at the cell/electrolyte or cell/material interface, favouring, respectively, Ca2+ release from intracellular stores or extracellular Ca2+ influx. This research provides a simple tool to selectively control distinct Ca2+ signals in brain astrocytes for neuroscience and bioelectronic medicine.
Abstract:
Oxides RNiO(3) (R = rare earth, R ≠ La) exhibit a metal-insulator (MI) transition at a temperature T(MI) and an antiferromagnetic (AF) transition at T(N). Specific heat (C(P)) and anelastic spectroscopy measurements were performed on samples of Nd(1-x)Eu(x)NiO(3), 0 ≤ x ≤ 0.35. For x = 0, a peak in C(P) is observed upon cooling and warming at essentially the same temperature, T(MI) = T(N) ≈ 195 K, although the cooling peak is much smaller. For x ≥ 0.25, differences between the cooling and warming curves are negligible, and two well-defined peaks are clearly observed: one at lower temperature, which defines T(N), and the other at T(MI). An external magnetic field of 9 T had no significant effect on these results. The elastic compliance (s) and the reciprocal of the mechanical quality factor (Q(-1)) of NdNiO(3), measured upon warming, showed a very sharp peak at essentially the same temperature obtained from C(P), and no peak is observed upon cooling. The elastic modulus hardens below T(MI) much more sharply upon warming, while the cooling and warming curves are reproducible above T(MI). Conversely, for the sample with x = 0.35, the s and Q(-1) curves are very similar upon warming and cooling. The results presented here give credence to the proposition that the MI phase transition changes from first to second order with increasing Eu doping. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3549615]
Abstract:
Squamous differentiation of keratinocytes is associated with decreases in E2F-1 mRNA expression and E2F activity, and these processes are disrupted in squamous cell carcinoma cell lines. We now show that E2F-1 mRNA expression is increased in primary squamous cell carcinomas of the skin relative to normal epidermis. To explore the relationship between E2F-1 and squamous differentiation further, we examined the effect of altering E2F activity in primary human keratinocytes induced to differentiate. Promoter activity of the proliferation-associated genes cdc2 and keratin 14 is inhibited during squamous differentiation. This inhibition can be overcome by overexpression of E2F-1 in keratinocytes. Overexpression of E2F-1 also suppressed the expression of differentiation markers (transglutaminase type 1 and keratin 10) in differentiated keratinocytes. Blocking E2F activity by transfecting proliferating keratinocytes with dominant negative E2F-1 constructs inhibited the expression of cdc2 and E2F-1, but did not induce differentiation. Furthermore, expression of the dominant negative construct in epithelial carcinoma cell lines and normal keratinocytes decreased expression from the cdc2 promoter. These data indicate that E2F-1 promotes the expression of keratinocyte proliferation-specific marker genes and suppresses squamous differentiation-specific marker genes. Moreover, they indicate that targeted disruption of E2F-1 activity may have therapeutic potential for the treatment of squamous carcinomas.
Abstract:
We have generated transgenic mice that harbor a 140 kb genomic fragment of the human BRCA1 locus (TgN.BRCA1(GEN)). We find that the transgene directs appropriate expression of human BRCA1 transcripts in multiple mouse tissues, and that human BRCA1 protein is expressed and stabilized following exposure to DNA damage. Such mice are completely normal, with no overt signs of the BRCA1 toxicity commonly observed when BRCA1 is expressed from heterologous promoters. Most importantly, however, the transgene rescues the otherwise lethal phenotype associated with the targeted hypomorphic allele (Brca1(Delta exIISA)). Brca1(-/-); TgN.BRCA1(GEN) bigenic animals develop normally and can be maintained as a distinct line. These results show that a 140 kb fragment of chromosome 17 contains all elements necessary for the correct expression, localization, and function of the BRCA1 protein. Further, the model provides evidence that the function and regulation of the human BRCA1 gene can be studied and manipulated in a genetically tractable mammalian system.
Abstract:
The movement of chemicals through the soil to the groundwater, or their discharge to surface waters, represents a degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. The chemicals of interest include nutrients, pesticides, salts, and industrial wastes. Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals and overestimating the concentration of resident solutes. This inaccuracy results primarily from ignoring soil structure and nonequilibrium between soil constituents, water, and solutes. A multiple sample percolation system (MSPS), consisting of 25 individual collection wells, was constructed to study the effects of localized soil heterogeneities on the transport of nutrients (NO3-, Cl-, PO43-) in the vadose zone of a clay-dominated agricultural soil. Very significant variations in drainage patterns across a small spatial scale were observed (one-way ANOVA, p < 0.001), indicating considerable heterogeneity in water flow patterns and nutrient leaching. Using data collected from the multiple sample percolation experiments, this paper compares the performance of two mathematical models for predicting solute transport: the advection-dispersion model with a reaction term (ADR), and a two-region preferential flow model (TRM) suitable for modelling nonequilibrium transport. These results have implications for modelling solute transport and predicting nutrient loading on a larger scale. (C) 2001 Elsevier Science Ltd. All rights reserved.
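For reference, the one-dimensional advection-dispersion model with a reaction term is written here in its standard generic form (the paper's calibration and the two-region model's mobile-immobile mass-transfer terms are not reproduced):

```latex
\[
R\,\frac{\partial C}{\partial t}
\;=\; D\,\frac{\partial^{2} C}{\partial z^{2}}
\;-\; v\,\frac{\partial C}{\partial z}
\;-\; \mu C ,
\]
```

where C is the solute concentration, z depth, D the dispersion coefficient, v the pore-water velocity, R a retardation factor, and μ a first-order reaction rate (e.g. decay or uptake). The two-region model instead partitions the water into mobile and immobile domains that exchange solute at a finite rate, which is what allows it to represent the preferential flow and nonequilibrium behaviour observed in the MSPS experiments.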