926 results for "Business Value Two-Layer Model"


Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

The Swiss Institute for Snow and Avalanche Research (SLF) developed SNOWPACK, a multi-layer thermodynamic snow model that simulates the geophysical properties of the snowpack (density, temperature, grain size, water content, etc.), from which a stability index is computed. It has been shown that an adjustment of the microstructure would be necessary for deployment in Canada. The main objective of the present study is to enable SNOWPACK to model snow grain size more realistically and thereby obtain a more accurate prediction of snowpack stability through the grain-size-based Structural Stability Index (SSI). To this end, the model's error (bias) was analyzed using accurate field measurements of grain size made with the IRIS instrument (InfraRed Integrated Sphere). Data were collected during the winter of 2014 at two sites in Canada: Glacier National Park, British Columbia, and Jasper National Park. The Fidelity site was generally subject to equilibrium metamorphism, whereas the Jasper site experienced more pronounced kinetic metamorphism. At each site, the stratigraphy, density profiles, and (IRIS) grain-size profiles were completed; the Fidelity profiles were supplemented with micro-penetrometer (SMP) measurements. Analysis of the density profiles showed good agreement with the modeled densities (R² = 0.76), so the simulated strength used in the SSI was judged adequate. The instability layers predicted by SNOWPACK were identified from the variation in resistance in the SMP measurements. Analysis of the optical grain size revealed a systematic overestimation by the model, consistent with the literature. The optical grain-size error in an equilibrium environment was fairly constant, whereas the error in kinetic environments was more variable. Finally, an approach oriented on climate type would be the best way to correct grain size for stability assessment in Canada.

Relevance: 100.00%

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer, as a subroutine, executing a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it is easily extended to accommodate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., a normal node is guaranteed to behave as programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers.

In the second part of the thesis, we treat a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge-learning task, however, scalar-valued trust is not sufficient to reflect a worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. Our model is flexible and extensible to incorporate metadata associated with questions. To show this, we further propose two extended models, one handling input tasks with real-valued features and the other handling tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment adaptive to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine-learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that in real-life applications questions are often not independent; instead, there are logical relations between them. Similarly, the workers who provide answers are not independent of each other: answers given by workers with similar attributes tend to be correlated. We therefore propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations with first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we devised a scalable joint inference algorithm based on the alternating direction method of multipliers.

The third part of the thesis considers optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost associated with both the worker's level of trustworthiness and the difficulty of the task. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completing a challenging task is more costly than a click-away question. We address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a preset budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that relates naturally to the budget, the trustworthiness of the crowd, and the costs of obtaining labels: a higher budget, a more trustworthy crowd, and less costly jobs all lower the theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component, so it can be combined with generic trust evaluation algorithms.
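The trust-based fusion idea in the second problem can be illustrated with a minimal trust-weighted vote over binary answers. This is only a sketch: the worker names and trust scores below are hypothetical, and the thesis's actual model jointly infers multi-dimensional trust rather than using fixed scalar weights.

```python
def weighted_vote(answers, trust):
    """Trust-weighted majority vote over binary answers in {0, 1}.

    answers: worker -> answer; trust: worker -> weight in [0, 1].
    """
    score = sum(trust[w] * (1 if a == 1 else -1) for w, a in answers.items())
    return 1 if score >= 0 else 0

# hypothetical workers: w3 has low trust, so its dissenting answer barely counts
answers = {"w1": 1, "w2": 1, "w3": 0}
trust = {"w1": 0.9, "w2": 0.7, "w3": 0.2}
label = weighted_vote(answers, trust)  # -> 1
```

Down-weighting suspected adversaries this way is what makes the aggregate robust to a minority of malicious inputs.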

Relevance: 100.00%

Abstract:

To understand the evolution of bipedalism among the hominoids in an ecological context, we need to be able to estimate the energetic cost of locomotion in fossil forms. Ideally such an estimate would be based entirely on morphology since, except for the rare instances where footprints are preserved, this is the only primary source of evidence available. In this paper we use evolutionary robotics techniques (genetic algorithms, pattern generators, and mechanical modeling) to produce a biomimetic simulation of bipedalism based on human body dimensions. The mechanical simulation is a seven-segment, two-dimensional model with motive force provided by tension generators representing the major muscle groups acting around the lower-limb joints. Metabolic energy costs are calculated from the muscle model, and bipedal gait is generated using a finite-state pattern generator whose parameters are produced by a genetic algorithm with locomotor economy (maximum distance for a fixed energy cost) as the fitness criterion. The model is validated by comparing the values it generates with those for modern humans. The result (maximum efficiency of 200 J m⁻¹) is within 15% of the experimentally derived value, which is very encouraging and suggests that this is a useful analytic technique for investigating the locomotor behaviour of fossil forms. Initial work suggests that in the future this technique could be used to estimate other locomotor parameters such as top speed. In addition, the animations produced by this technique are qualitatively very convincing, which suggests that it may also be a useful technique for visualizing bipedal locomotion.
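The genetic-algorithm loop described above can be sketched in miniature. The toy below evolves a single gait parameter against a made-up economy surface; the quadratic cost function and all numbers are invented for illustration (the paper's actual model optimizes a full pattern-generator parameter set against a muscle model).

```python
import random

def economy(stride):
    """Hypothetical economy surface: distance per unit energy, peaking at 1.2 m."""
    return -(stride - 1.2) ** 2 + 2.0

def evolve(pop_size=30, generations=60, seed=0):
    """Toy GA: keep the fitter half, refill with mutated copies of parents."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.5, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=economy, reverse=True)
        parents = pop[: pop_size // 2]
        children = [min(2.0, max(0.5, rng.choice(parents) + rng.gauss(0, 0.05)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=economy)

best_stride = evolve()  # converges near the 1.2 m optimum of this toy surface
```

With economy as the fitness criterion, selection plus small Gaussian mutations is enough to climb to the optimum of a smooth surface like this one.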

Relevance: 100.00%

Abstract:

This Ph.D. thesis contains four essays in mathematical finance, focusing on pricing Asian options (Chapter 4), pricing futures and futures options (Chapters 5 and 6), and time-dependent volatility in futures options (Chapter 7). In Chapter 4, the applicability of the comonotonicity approach of Albrecher et al. (2005) is investigated in the context of various benchmark models for equities and commodities. Instead of the classical Lévy models of Albrecher et al. (2005), the focus is on the Heston stochastic volatility model, the constant elasticity of variance (CEV) model, and the Schwartz (1997) two-factor model. It is shown that the method delivers rather tight upper bounds for the prices of Asian options in these models and, as a by-product, delivers super-hedging strategies that can be easily implemented. In Chapter 5, two types of three-factor models that allow volatility to be stochastic are studied for valuing commodity futures contracts. Both models have closed-form solutions for futures prices. It is shown, however, that Model 2 is theoretically superior to Model 1 and also performs very well empirically; moreover, Model 2 can easily be implemented in practice. In comparison with the Schwartz (1997) two-factor model, Model 2 has its own advantages, so it is also a good choice for pricing commodity futures contracts. Furthermore, if the two models are used together, a more accurate price for commodity futures contracts can be obtained in most situations. In Chapter 6, the applicability of the asymptotic approach developed in Fouque et al. (2000b) is investigated for pricing commodity futures options in a Schwartz (1997) multi-factor model featuring both stochastic convenience yield and stochastic volatility. It is shown that the zero-order term in the expansion coincides with the Schwartz (1997) two-factor term with averaged volatility, and an explicit expression for the first-order correction term is provided. With empirical data from the natural gas futures market, it is also demonstrated that a significantly better calibration can be achieved by using the correction term rather than the standard Schwartz (1997) two-factor expression, at virtually no extra effort. In Chapter 7, a new pricing formula is derived for futures options in the Schwartz (1997) two-factor model with time-dependent spot volatility. The formula can also be used to back out the time-dependent spot volatility from futures option prices observed in the market. The limitations of the method used to find the time-dependent spot volatility are explained, and it is shown how to verify its accuracy.
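For context, a plain Monte Carlo price of an arithmetic-average Asian call under geometric Brownian motion is easy to sketch; this is the simplest possible setting, not the Heston/CEV/Schwartz models of the chapter, and a baseline like it is what analytic upper bounds would be checked against. All parameter values below are illustrative.

```python
import math
import random

def asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                  steps=12, paths=20000, seed=1):
    """Monte Carlo price of an arithmetic-average Asian call under GBM."""
    rng = random.Random(seed)
    dt = t / steps
    drift = (r - 0.5 * sigma * sigma) * dt   # risk-neutral log-drift per step
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        s, running = s0, 0.0
        for _ in range(steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            running += s
        total += max(running / steps - k, 0.0)  # payoff on the arithmetic average
    return math.exp(-r * t) * total / paths

price = asian_call_mc()
```

Averaging reduces effective volatility, so the Asian call prices well below the corresponding European call; a tight comonotonic upper bound should sit just above the Monte Carlo estimate.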

Relevance: 100.00%

Abstract:

Part 10: Sustainability and Trust

Relevance: 100.00%

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and alternative project-financing options. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state, and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost; agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve on the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private building owners are under mandates to conserve energy; for example, Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities; the two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).

An additional model leverages a single congressional appropriation to implement ECM projects, with returns from implemented projects used to fund additional ones (Chapter 3). In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence both the ECM project selection and the size of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage; a comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates that a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interaction effects between ECM projects. The approach uses McCormick (1976) inequalities to re-express constraints involving products of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefit of learning between stages while remaining consistent with the single-appropriation framework.
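At its core, selecting ECM projects under a fixed appropriation is a budget-constrained knapsack; the dissertation's MPEC and multi-stage stochastic models enrich this with financing structure and uncertainty. A minimal deterministic sketch, with invented project names, costs, and savings:

```python
def select_ecms(projects, budget):
    """0-1 knapsack: pick the ECM set maximizing annual savings within budget.

    projects: list of (name, cost, annual_savings); costs and budget are integers.
    """
    dp = [(0.0, frozenset())] * (budget + 1)  # dp[b] = (best savings, chosen set)
    for name, cost, savings in projects:
        for b in range(budget, cost - 1, -1):  # descending: each project used once
            candidate = dp[b - cost][0] + savings
            if candidate > dp[b][0]:
                dp[b] = (candidate, dp[b - cost][1] | {name})
    return dp[budget]

# hypothetical projects: (name, cost in $100k, annual savings in $100k)
projects = [("lighting", 3, 1.2), ("hvac", 5, 2.0), ("insulation", 4, 1.5)]
best_savings, chosen = select_ecms(projects, 8)
```

Under a budget of 8 this picks lighting and HVAC (savings 3.2) over the cheaper lighting-plus-insulation bundle, which is exactly the kind of trade-off the optimization models formalize at scale.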

Relevance: 100.00%

Abstract:

Transportation system resilience has been the subject of several recent studies. To assess the resilience of a transportation network, however, it is essential to model its interactions with and reliance on other lifelines. In this work, a bi-level, mixed-integer, stochastic program is presented for quantifying the resilience of a coupled traffic-power network under a host of potential natural or anthropogenic hazard-impact scenarios. A two-layer network representation is employed that includes details of both systems. Interdependencies between the urban traffic and electric power distribution systems are captured through linking variables and logical constraints. The modeling approach was applied on a case study developed on a portion of the signalized traffic-power distribution system in southern Minneapolis. The results of the case study show the importance of explicitly considering interdependencies between critical infrastructures in transportation resilience estimation. The results also provide insights on lifeline performance from an alternative power perspective.
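The linking between layers can be illustrated with a toy feasibility check: a signalized intersection serves its demand only if the power-distribution bus feeding it is energized. All names and numbers below are hypothetical; the actual formulation is a bi-level, mixed-integer, stochastic program, not this one-liner.

```python
def served_demand(bus_up, signal_bus, demand):
    """Traffic demand served: an intersection counts only if its power bus is up.

    bus_up: bus -> bool; signal_bus: intersection -> bus; demand: intersection -> veh/h.
    """
    return sum(d for i, d in demand.items() if bus_up[signal_bus[i]])

# hypothetical hazard scenario: distribution bus "B2" is de-energized
bus_up = {"B1": True, "B2": False}
signal_bus = {"i1": "B1", "i2": "B2", "i3": "B1"}
demand = {"i1": 800, "i2": 600, "i3": 400}
lost = sum(demand.values()) - served_demand(bus_up, signal_bus, demand)  # 600 veh/h
```

Logical constraints of this form (traffic variable active only if the linked power variable is) are what couple the two network layers in the optimization.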

Relevance: 100.00%

Abstract:

Over the past twenty years, regional anesthesia has become an essential tool for building perioperative analgesic protocols in both veterinary and human medicine. Among the techniques developed for canine anesthesia, the paravertebral brachial plexus block (PBPB) and its modified version are of great interest for any procedure on the proximal thoracic limb. To date, however, most published data come from dye studies without clinical evaluation, and little information is available on the nerve-localization techniques applicable at this site. Our study aimed to describe an ultrasound-guided approach to the modified PBPB and to characterize its pharmacokinetic and pharmacodynamic parameters after administration of lidocaine (LI) or lidocaine with epinephrine (LA). Eight dogs were included in a prospective, randomized, blinded, crossover protocol run over three periods. The pharmacodynamic impact of the block performed with LI or LA was assessed regularly for 180 min after its execution. Epinephrine had no significant effect (P = 0.845) on the duration of the sensory block, as assessed by a mechanical noxious stimulus applied to the targeted dermatomes. In contrast, proprioceptive impairment assessed by gait was prolonged (P = 0.027), and the motor block measured by peak vertical force (PVF) at a trot on the force plate was more pronounced (reduced PVF; P = 0.007) under LA. At both a stand and a trot, the nadir of the PVF-time curve was delayed (P < 0.005) and the ascending slope of the return to normal values was flattened (P = 0.005). In parallel with the clinical evaluations, plasma samples were collected regularly to quantify and describe the pharmacokinetics of lidocaine. Of the three models developed, a two-compartment model with dual asynchronous zero-order absorption was ultimately selected and fitted to the experimental data. Under LA, Cmax was significantly decreased (P < 0.001), the absorption phases were prolonged [P < 0.020 (Dur1) and P < 0.001 (Dur2)], and their rate constants were reduced [P = 0.046 (k01) and P < 0.001 (k02)], all consistent with the reported proprioceptive and motor effects. Although dose extrapolation is now theoretically possible from the model presented here, further studies are still needed to establish a PBPB protocol of clinical interest. Force-plate analysis could then become a tool of choice for assessing block efficacy in an experimental setting.

Relevance: 100.00%

Abstract:

The main focus of this paper is the systematization of the benefits that environmental measures generate within the company. In traditional corporate accounting, environmental protection appears mostly on the cost side. However, the environmental benefits arising within the company play an important role in decisions on environmental measures and investments. Despite this, environmental benefits are treated only superficially in the accounting literature and are not considered for the company as a whole. This paper therefore aims to find a way to present environmental benefits in the accounting system. The main outcome is a new model, closely tied to the shareholder-value concept, that can comprehensively present the environmental benefits arising within the whole company. The model is novel in both Hungarian and international practice, and a major advantage is its applicability in corporate practice: it can be integrated into the company's accounting system.


Relevance: 100.00%

Abstract:

IS/IT investments are seen as having an enormous potential impact on the competitive position of the firm and on its performance, and they demand the active, motivated participation of several stakeholder groups. The shortfall of evidence concerning the productivity of IT became known as the 'productivity paradox'; as the Nobel laureate economist Robert Solow put it, "we see computers everywhere except in the productivity statistics." An important stream of research conducted all over the world, known in the literature as the 'IS business value' field, has tried to understand these phenomena. However, there is a gap in the literature concerning the Portuguese situation: no empirical work has been done to date on the impact of information technology adoption on the productivity of Portuguese firms. Using data from two surveys conducted by the Portuguese National Institute of Statistics (INE), the Inquiry to the use of IT by Portuguese companies (IUTIC) and the Inquiry Harmonized to (Portuguese) companies (accounting data), this study relates, using regression analysis, the amounts spent on IT to return on equity, used as a proxy for firm productivity, for Portuguese companies with more than 250 employees. The aim of this paper is to shed light on the impact of IS/IT on the productivity of Portugal's top companies. Empirically, we test the impact of IT expenditure on the productivity of a sample of large Portuguese companies. Our results, based on firm-level data on IT expenditure and firm productivity measured by return on equity (1186 observations for the years 2003 and 2004), exhibit a negative impact of IT expenditure on firm productivity, in line with the claims of the 'productivity paradox'.
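The study's core estimation can be sketched with a closed-form simple regression. The five data points below are synthetic, chosen only to mimic the reported negative relation; the actual study regresses ROE on IT expenditure over 1186 firm-year observations.

```python
def ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

# synthetic firm data (NOT the study's): IT spend vs. ROE with a negative relation
it_spend = [1.0, 2.0, 3.0, 4.0, 5.0]
roe = [12.0, 11.0, 9.5, 9.0, 8.0]
intercept, slope = ols(it_spend, roe)  # negative slope, echoing the paradox
```

A negative fitted slope, as here, is the pattern the 'productivity paradox' literature reports; the study additionally controls for firm size by restricting the sample to large companies.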

Relevance: 100.00%

Abstract:

Data from composite beef cattle were analyzed to evaluate the effect of epistasis in genetic evaluation models. The traits analyzed were weight at 205 days (P205), weight at 390 days (P390), and scrotal circumference at 390 days (PE390). Analyses were carried out by maximum likelihood under two models: model 1 included as covariates the direct and maternal additive effects and the non-additive heterozygosity effects for the direct and total maternal effects, while model 2 also considered the direct epistatic effect. The models were compared using the Akaike information criterion (AIC), the Schwarz Bayesian information criterion (BIC), and the likelihood-ratio test. Including epistasis in the genetic evaluation model changed the estimates of additive genetic (co)variance components, and hence the heritabilities, only slightly. The likelihood-ratio test and the AIC indicated that model 2, which includes epistasis, fit the data better for all traits analyzed; the BIC favored this model only for P205. For the genetic analysis of this population, the model accounting for the epistatic effect was the most adequate.
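The model-comparison machinery used here is standard and easy to sketch. The log-likelihoods and parameter counts below are invented for illustration; they are not the study's values.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: lower is better."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Schwarz Bayesian information criterion: lower is better."""
    return k * math.log(n) - 2 * loglik

def lr_stat(loglik_restricted, loglik_full):
    """Likelihood-ratio statistic, ~ chi-squared with df = extra parameters."""
    return 2.0 * (loglik_full - loglik_restricted)

# invented fits: model 2 adds one epistasis parameter to model 1
ll1, k1 = -1520.3, 5
ll2, k2 = -1516.8, 6
n_records = 1000
better_by_aic = aic(ll2, k2) < aic(ll1, k1)  # True for these invented values
lr = lr_stat(ll1, ll2)                       # 7.0 > 3.84 = chi2(1, 0.05) cutoff
```

Because BIC penalizes the extra parameter by log(n) rather than 2, it can disagree with AIC and the likelihood-ratio test on the same fits, which is exactly the pattern reported for the non-P205 traits.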

Relevance: 100.00%

Abstract:

We describe the concept, the fabrication, and the most relevant properties of a piezoelectric-polymer system: two fluoroethylenepropylene (FEP) films with good electret properties are laminated around a specifically designed and prepared polytetrafluoroethylene (PTFE) template at 300 °C. After removing the PTFE template, a two-layer FEP film with open tubular channels is obtained. For electric charging, the two-layer FEP system is subjected to a high electric field. The resulting dielectric barrier discharges inside the tubular channels yield a ferroelectret with high piezoelectricity; d(33) coefficients of up to 160 pC/N have already been achieved on the ferroelectret films. After charging at suitably elevated temperatures, the piezoelectricity is stable at temperatures of at least 130 °C. Advantages of the transducer films include ease of fabrication at laboratory or industrial scale, a wide range of possible geometrical and processing parameters, straightforward control of the uniformity of the polymer system, the flexibility and versatility of the soft ferroelectrets, and a large potential for device applications, e.g., in biomedicine, communications, production engineering, sensor systems, and environmental monitoring.
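To put the reported d(33) in perspective: in the direct piezoelectric effect the generated charge is Q = d33 · F, so a film with d33 = 160 pC/N produces 320 pC under a 2 N force. A one-line sketch of that arithmetic:

```python
def piezo_charge_pC(d33_pC_per_N, force_N):
    """Direct piezoelectric effect: generated charge Q = d33 * F."""
    return d33_pC_per_N * force_N

q = piezo_charge_pC(160, 2.0)  # 320.0 pC for the reported d33 under a 2 N load
```

For comparison, this d(33) is an order of magnitude above typical PVDF films, which is what makes such ferroelectrets attractive for soft sensors.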

Relevance: 100.00%

Abstract:

Rheological properties of adherent cells are essential for their physiological functions, and microrheological measurements on living cells have shown that their viscoelastic responses follow a weak power law over a wide range of time scales. This power law is also influenced by the mechanical prestress borne by the cytoskeleton, suggesting that cytoskeletal prestress determines the cell's viscoelasticity, but the biophysical origins of this behavior are largely unknown. We have recently developed a stochastic two-dimensional model of an elastically jointed chain that links power-law rheology to prestress. Here we use a similar approach to study the creep response of a prestressed three-dimensional elastically jointed chain as a viscoelastic model of the semiflexible polymers that comprise the prestressed cytoskeletal lattice. Using a Monte Carlo based algorithm, we show that numerical simulations of the chain's creep behavior closely correspond to the behavior observed experimentally in living cells. The power-law creep behavior results from a finite-speed propagation of free energy from the chain's end points toward its center in response to an externally applied stretching force. The property that links the power law to the prestress is the chain's stiffening with increasing prestress, which originates from entropic and enthalpic contributions. These results indicate that the essential features of cellular rheology can be explained by the viscoelastic behavior of individual semiflexible polymers of the cytoskeleton.
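The weak power-law creep described above is typically quantified by fitting J(t) = J0 · t^β by least squares in log-log space. A minimal sketch on synthetic, noise-free data; the prefactor and the exponent 0.2 are illustrative (values of this order are commonly reported for cells), not results from this paper.

```python
import math

def fit_power_law(times, creep):
    """Fit J(t) = J0 * t**beta by least squares on log-transformed data."""
    lx = [math.log(t) for t in times]
    ly = [math.log(j) for j in creep]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
            / sum((x - mx) ** 2 for x in lx))
    j0 = math.exp(my - beta * mx)
    return j0, beta

# synthetic creep data spanning four decades in time, exponent 0.2
times = [0.01, 0.1, 1.0, 10.0, 100.0]
creep = [2.0 * t ** 0.2 for t in times]
j0, beta = fit_power_law(times, creep)  # recovers J0 = 2.0, beta = 0.2
```

The fitted exponent β is the quantity that the model predicts should decrease as cytoskeletal prestress (and hence chain stiffening) increases.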