ResearchPad - General Mathematics https://www.researchpad.co

<![CDATA[Climate risk index for Italy]]> https://www.researchpad.co/product?articleinfo=5b594ca3463d7e5ad39d04c0

We describe a climate risk index that has been developed to inform national climate adaptation planning in Italy and that is further elaborated in this paper. The index supports national authorities in designing adaptation policies and plans, guides the initial problem formulation phase, and identifies administrative areas with a higher propensity to be adversely affected by climate change. The index combines (i) climate change-amplified hazards; (ii) high-resolution indicators of exposure of chosen economic, social, natural and built or manufactured capital (MC) assets; and (iii) vulnerability, which comprises both present sensitivity to climate-induced hazards and adaptive capacity. We use standardized anomalies of selected extreme climate indices derived from high-resolution regional climate model simulations of the EURO-CORDEX initiative as proxies of climate change-altered weather and climate-related hazards. The exposure and sensitivity assessment is based on indicators of manufactured, natural, social and economic capital assets exposed to and adversely affected by climate-related hazards. MC refers to material goods or fixed assets which support the production process (e.g. industrial machines and buildings); Natural Capital (NC) comprises natural resources and processes (renewable and non-renewable) that produce goods and services for well-being; Social Capital (SC) addresses factors at the individual (people's health, knowledge, skills) and collective (institutional) level (e.g. families, communities, organizations and schools); and Economic Capital (EC) includes owned and traded goods and services. The results of the climate risk analysis are used to rank the subnational administrative and statistical units according to their climate risk challenges, and potentially to guide the allocation of financial resources for climate adaptation.
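As a rough illustration of how hazard, exposure and vulnerability components might be combined into a single score, here is a minimal Python sketch. The aggregation scheme, weights, unit names and data are illustrative assumptions, not the paper's actual methodology; all indicators are presumed pre-rescaled to [0, 1].

```python
import numpy as np

def climate_risk_index(hazard, exposure, vulnerability, weights=(1/3, 1/3, 1/3)):
    """Toy composite risk score for one administrative unit.

    All inputs are assumed pre-rescaled to [0, 1] (e.g. min-max normalized
    standardized anomalies for the hazard component).
    """
    components = np.array([hazard, exposure, vulnerability])
    # Weighted geometric mean: risk vanishes if any component is zero.
    return float(np.prod(components ** np.asarray(weights)))

# Rank hypothetical units from highest to lowest risk (toy data).
units = {"A": (0.8, 0.6, 0.4), "B": (0.5, 0.9, 0.7), "C": (0.3, 0.2, 0.9)}
ranking = sorted(units, key=lambda u: climate_risk_index(*units[u]), reverse=True)
print(ranking)
```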

This article is part of the theme issue ‘Advances in risk assessment for climate change adaptation policy’.

]]>
<![CDATA[An investigation into inflection-point instability in the entrance region of a pulsating pipe flow]]> https://www.researchpad.co/product?articleinfo=5b37280a463d7e652cbde15a

This paper investigates the inflection-point instability that governs the flow disturbance initiated in the entrance region of a pulsating pipe flow. Under such flow conditions, the instability grows within a certain phase region of the pulsating cycle, during which the inflection point in the unsteady mean flow lifts away from the viscous-effect-dominated region known as the Stokes layer. The characteristic frequency of the instability is found to agree with that predicted by the mixing-layer model. Comparison with cases that do not fall into this category further verifies that the phenomenon takes place only if the inflection point lifts sufficiently far away from the Stokes layer.
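For orientation, the Stokes layer invoked above has a standard thickness scale for oscillatory flow (a textbook result, not a value derived in this paper):

```latex
% Stokes layer thickness for kinematic viscosity \nu and pulsation
% angular frequency \omega (textbook oscillatory boundary-layer scale):
\delta_s = \sqrt{\frac{2\nu}{\omega}}
```

The instability described here is expected only when the inflection point of the mean velocity profile sits well above a height of order \delta_s.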

]]>
<![CDATA[Making a meaningful impact: modelling simultaneous frictional collisions in spatial multibody systems]]> https://www.researchpad.co/product?articleinfo=5bce08dd40307c536040dfb4

<![CDATA[Thickness-dependent electrocaloric effect in mixed-phase Pb0.87Ba0.1La0.02(Zr0.6Sn0.33Ti0.07)O3 thin films]]> https://www.researchpad.co/product?articleinfo=5bcdd90640307c42a330290b

<![CDATA[A universal metric for ferroic energy materials]]> https://www.researchpad.co/product?articleinfo=5bcdd90040307c42a3302909

<![CDATA[Arts of electrical impedance tomographic sensing]]> https://www.researchpad.co/product?articleinfo=5bcbfb7340307c58a99fb39f

<![CDATA[Alterations in the coupling functions between cortical and cardio-respiratory oscillations due to anaesthesia with propofol and sevoflurane]]> https://www.researchpad.co/product?articleinfo=5bca397040307c7a3ccdeaff

<![CDATA[Communication networks beyond the capacity crunch]]> https://www.researchpad.co/product?articleinfo=5bc7cae440307c5a700cb4ab

This issue of Philosophical Transactions of the Royal Society A represents a summary of the recent discussion meeting ‘Communication networks beyond the capacity crunch’. The purpose of the meeting was to establish the nature of the capacity crunch, to estimate the time scales associated with it and to begin to find solutions enabling continued growth in a post-crunch era. The meeting confirmed that, in addition to a capacity shortage within a single optical fibre, many other ‘crunches’ are foreseen in the field of communications, both societal and technical. Technical crunches identified included the nonlinear Shannon limit, wireless spectrum and the distribution of 5G signals (fronthaul and backhaul), while societal influences included net neutrality, creative content generation and distribution, latency, and finally energy and cost. The meeting concluded with the observation that these many crunches are genuine and may influence our future use of technology, but encouragingly noted that research and business practice are already moving to alleviate many of the negative consequences.

]]>
<![CDATA[Maximizing the optical network capacity]]> https://www.researchpad.co/product?articleinfo=5bc7cadc40307c5a700cb4a8

Most of the digital data transmitted are carried by optical fibres, which form the greater part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of these techniques and discusses future prospects for success in the quest for capacity.
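The capacity peak implied by the Kerr nonlinearity can be illustrated with the widely used Gaussian-noise (GN) model, in which nonlinear interference grows with the cube of launch power. The sketch below is a generic illustration with made-up parameter values, not this paper's analysis.

```python
import numpy as np

# In the GN model, Kerr nonlinearity adds an interference term growing as
# P**3, so spectral efficiency peaks at a finite launch power, unlike the
# linear Shannon channel. All parameter values below are illustrative.
P = np.logspace(-6, -1, 200)             # launch power per channel (W)
N_ase = 1e-6                             # amplifier (ASE) noise power (W)
eta = 1e3                                # nonlinear coefficient (1/W^2)

se_linear = np.log2(1 + P / N_ase)               # linear channel: unbounded
se_nonlinear = np.log2(1 + P / (N_ase + eta * P**3))

p_opt = P[np.argmax(se_nonlinear)]
print(f"peak spectral efficiency {se_nonlinear.max():.1f} bit/s/Hz "
      f"at launch power {p_opt:.1e} W")
```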

]]>
<![CDATA[New optical fibres for high-capacity optical communications]]> https://www.researchpad.co/product?articleinfo=5bc7cadf40307c5a700cb4a9

Researchers are within a factor of about 2 of realizing the maximum practical transmission capacity of conventional single-mode fibre transmission technology. It is therefore timely to consider new technological approaches offering the potential for more cost-effective scaling of network capacity than simply installing more and more conventional single-mode systems in parallel. In this paper, I review physical-layer options that can be considered to address this requirement, including the potential for reducing both fibre loss and nonlinearity in single-mode fibres, the development of ultra-broadband fibre amplifiers and, finally, the use of space division multiplexing.

]]>
<![CDATA[From photons to big-data applications: terminating terabits]]> https://www.researchpad.co/product?articleinfo=5bc7cae240307c5a700cb4aa

Computer architectures have reached a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems as the gap grows between the quantity of networked data and the per-system data-processing capacity. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Taking a networked computer system capable of processing terabits per second as a benchmark for scalability, we critique the state of the art in commodity computing and propose a wholesale reconsideration of the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers.

]]>
<![CDATA[Once the Internet can measure itself]]> https://www.researchpad.co/product?articleinfo=5bc7cad940307c5a700cb4a7

In communications, the obstacle to high-bandwidth, reliable transmission is usually the interconnections, not the links. Nowhere is this more evident than on the Internet, where broadband connections to homes, offices and now mobile smartphones are a frequent source of frustration, and the interconnections between the roughly 50 000 subnetworks (autonomous systems, or ASes) from which it is formed even more so. The structure of the AS graph formed by these interconnections is unspecified, undocumented and only guessed at through measurement, yet it shows surprising efficiencies. Under recent pressures for network neutrality and openness or ‘transparency’, operators, several classes of users and regulatory bodies have a good chance of realizing these efficiencies, but they need improved measurement technology to manage this under continued growth. A long-standing vision of an Internet that measures itself, in which every intelligent port takes part in monitoring, can make this possible and may now be within reach.

]]>
<![CDATA[The effect of Mg location on Co-Mg-Ru/γ-Al2O3 Fischer–Tropsch catalysts]]> https://www.researchpad.co/product?articleinfo=5bc7983640307c3df00b4066

The effectiveness of Mg as a promoter of Co-Ru/γ-Al2O3 Fischer–Tropsch catalysts depends on how and when the Mg is added. When the Mg is impregnated into the support before the Co and Ru addition, some Mg is incorporated into the support in the form of MgxAl2O3+x if the material is calcined at 550°C or 800°C after the impregnation, while the remainder is present as amorphous MgO/MgCO3 phases. After subsequent Co-Ru impregnation, MgxCo3−xO4 is formed, which decomposes on reduction, leading to Co(0) particles intimately mixed with Mg, as shown by high-resolution transmission electron microscopy. Impregnating Co into an Mg-modified support dissolves the amorphous Mg, and it is this Mg that is then incorporated into MgxCo3−xO4. Acid washing or higher-temperature calcination after the Mg impregnation can remove most of this amorphous Mg, resulting in lower values of x in MgxCo3−xO4. Catalytic testing of these materials reveals that Mg incorporation into the Co oxide phase is severely detrimental to the site-time yield, while Mg incorporation into the support may provide some enhancement of activity at high temperature.

]]>
<![CDATA[Catalysts for CO2/epoxide ring-opening copolymerization]]> https://www.researchpad.co/product?articleinfo=5bc7983440307c3df00b4065

This article summarizes and reviews recent progress in the development of catalysts for the ring-opening copolymerization of carbon dioxide and epoxides. The copolymerization is an interesting method to add value to carbon dioxide, including from waste sources, and to reduce pollution associated with commodity polymer manufacture. The selection of the catalyst is of critical importance to control the composition, properties and applications of the resultant polymers. This review highlights and exemplifies some key recent findings and hypotheses, in particular using examples drawn from our own research.

]]>
<![CDATA[Comparing quantum versus Markov random walk models of judgements measured by rating scales]]> https://www.researchpad.co/product?articleinfo=5bc6427b40307c1f37fac22b

Quantum and Markov random walk models are proposed for describing how people evaluate stimuli using rating scales. To empirically test these competing models, we conducted an experiment in which participants judged the effectiveness of public health service announcements from either their own personal perspective or from the perspective of another person. The order of the self versus other judgements was manipulated, which produced significant sequential effects. The quantum and Markov models were fitted to the data using the same number of parameters, and the model comparison strongly supported the quantum over the Markov model.
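To make the contrast concrete, here is a minimal sketch of the two kinds of dynamics on a discrete rating scale: a Markov model evolves a probability vector with a stochastic matrix, while a quantum model evolves a complex amplitude vector with a unitary. The matrices, step count and scale size below are illustrative, not the fitted models from the paper.

```python
import numpy as np
from scipy.linalg import expm

n = 9                                    # rating-scale states (e.g. 1..9)

# Markov walk: a column-stochastic matrix evolves a probability vector.
T = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            T[j, i] += 0.5
T[0, 0] += 0.5                           # reflect probability at the ends
T[-1, -1] += 0.5
p = np.zeros(n)
p[n // 2] = 1.0                          # start at the middle rating

# Quantum walk: a unitary built from a tridiagonal Hamiltonian evolves a
# complex amplitude vector; response probabilities are squared magnitudes.
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
U = expm(-1j * H)                        # one step of unitary evolution
psi = np.zeros(n, dtype=complex)
psi[n // 2] = 1.0

for _ in range(4):                       # four steps of each model
    p = T @ p
    psi = U @ psi

print("Markov  :", np.round(p, 3))               # diffusive spread
print("Quantum :", np.round(np.abs(psi)**2, 3))  # interference fringes
```

The qualitative difference, diffusive spreading versus interference, is what allows the two model classes to make distinguishable predictions about sequential judgement effects.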

]]>
<![CDATA[Team decision problems with classical and quantum signals]]> https://www.researchpad.co/product?articleinfo=5bc6427840307c1f37fac22a

We study team decision problems where communication is not possible, but coordination among team members can be realized via signals in a shared environment. We consider a variety of decision problems that differ in what team members know about one another's actions and knowledge. For each type of decision problem, we investigate how different assumptions on the available signals affect team performance. Specifically, we consider the cases of perfectly correlated, i.i.d. and exchangeable classical signals, as well as the case of quantum signals. We find that, whereas in perfect-recall trees (Kuhn 1950 Proc. Natl Acad. Sci. USA 36, 570–576; Kuhn 1953 In Contributions to the theory of games, vol. II (eds H Kuhn, A Tucker), pp. 193–216) no type of signal improves performance, in imperfect-recall trees quantum signals may bring an improvement. Isbell (1957 In Contributions to the theory of games, vol. III (eds M Drescher, A Tucker, P Wolfe), pp. 79–96) proved that, in non-Kuhn trees, classical i.i.d. signals may improve performance. We show that further improvement may be possible by use of classical exchangeable or quantum signals. We include an example of the effect of quantum signals in the context of high-frequency trading.
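A toy example of why the signal structure matters: in a pure coordination task with no communication, each member playing the binary signal they observe coordinates the team perfectly under a shared (perfectly correlated) signal, but only about half the time under i.i.d. signals. This is a generic illustration, not one of the paper's decision trees.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# Strategy: each team member plays the binary signal they observe.
# Perfectly correlated signal: both members see the same value, so the
# team coordinates on every trial (success rate exactly 1).
# i.i.d. signals: each member sees an independent value.
sig_a = rng.integers(0, 2, trials)
sig_b = rng.integers(0, 2, trials)
print("i.i.d. coordination rate:", np.mean(sig_a == sig_b))  # ~0.5
```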

]]>
<![CDATA[Unlocking the potential of supported liquid phase catalysts with supercritical fluids: low temperature continuous flow catalysis with integrated product separation]]> https://www.researchpad.co/product?articleinfo=5bc59a9040307c477ce01a23

Solution-phase catalysis using molecular transition metal complexes is an extremely powerful tool for chemical synthesis and a key technology for sustainable manufacturing. However, as the reaction complexity and thermal sensitivity of the catalytic system increase, engineering challenges associated with product separation and catalyst recovery can override the value of the product. This persistent downstream issue often renders industrial exploitation of homogeneous catalysis uneconomical despite impressive batch performance of the catalyst. In this regard, continuous-flow systems that allow steady-state homogeneous turnover in a stationary liquid phase while at the same time effecting integrated product separation at mild process temperatures represent a particularly attractive scenario. While continuous-flow processing is a standard procedure for large volume manufacturing, capitalizing on its potential in the realm of the molecular complexity of organic synthesis is still an emerging area that requires innovative solutions. Here we highlight some recent developments which have succeeded in realizing such systems by the combination of near- and supercritical fluids with homogeneous catalysts in supported liquid phases. The cases discussed exemplify how all three levels of continuous-flow homogeneous catalysis (catalyst system, separation strategy, process scheme) must be matched to locate viable process conditions.

]]>
<![CDATA[Electrochemistry in supercritical fluids]]> https://www.researchpad.co/product?articleinfo=5bc59a9340307c477ce01a24

A wide range of supercritical fluids (SCFs) have been studied as solvents for electrochemistry with carbon dioxide and hydrofluorocarbons (HFCs) being the most extensively studied. Recent advances have shown that it is possible to get well-resolved voltammetry in SCFs by suitable choice of the conditions and the electrolyte. In this review, we discuss the voltammetry obtained in these systems, studies of the double-layer capacitance, work on the electrodeposition of metals into high aspect ratio nanopores and the use of metallocenes as redox probes and standards in both supercritical carbon dioxide–acetonitrile and supercritical HFCs.

]]>
<![CDATA[When does a physical system compute?]]> https://www.researchpad.co/product?articleinfo=5ba6b08040307c29fb0f9b26

Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell whether a given physical system is acting as a computer, leading to confusion over novel computational devices and even to claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems.
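The central test can be phrased as a commuting-diagram check: encode an abstract state into the physical system, let the device evolve, decode the result, and compare it with the abstract evolution. A toy sketch follows; all of the maps below are invented stand-ins for illustration, not definitions from the paper.

```python
# A physical system is usable as a computer (on this framework's reading)
# when encode -> physical evolution -> decode predicts the abstract
# evolution, i.e. when the diagram commutes.

def abstract_evolution(x):        # the abstract problem: double a number
    return 2 * x

def encode(x):                    # representation: number -> voltage (V)
    return x * 0.001

def physical_evolution(v):        # the device: an idealised x2 amplifier
    return 2.0 * v

def decode(v):                    # inverse representation
    return v / 0.001

for x in [1, 7, 42]:
    predicted = decode(physical_evolution(encode(x)))
    assert abs(predicted - abstract_evolution(x)) < 1e-9
print("diagram commutes: the device predicts the abstract evolution")
```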

]]>
<![CDATA[Addressing model error through atmospheric stochastic physical parametrizations: impact on the coupled ECMWF seasonal forecasting system]]> https://www.researchpad.co/product?articleinfo=5ba4ebe040307c691eb552b9

The finite resolution of general circulation models of the coupled atmosphere–ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere–ocean climate system in operational forecast mode, and the latest seasonal forecasting system, System 4, has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981–2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation scheme helped to reduce excessively strong convective activity, especially over the Maritime Continent and the tropical western Pacific, leading to reduced biases in outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden–Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Furthermore, the errors of El Niño–Southern Oscillation forecasts become smaller, and increases in ensemble spread lead to a better-calibrated system when the stochastic tendency perturbations are activated. The backscatter scheme has an overall neutral impact. Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid-latitude circulation regimes over the Pacific–North America region.
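In stochastically perturbed tendency schemes of this kind, the net parametrized tendency is multiplied by (1 + r), with r a spatially and temporally correlated random pattern. The zero-dimensional sketch below illustrates the idea with a single AR(1) process; the numerical values are illustrative assumptions, not ECMWF's operational settings.

```python
import numpy as np

# Minimal sketch of a stochastically perturbed tendency: multiply the net
# parametrized tendency by (1 + r), where r follows an AR(1) process with
# a prescribed decorrelation time. Parameter values are made up.
rng = np.random.default_rng(1)
dt, tau, sigma = 900.0, 6 * 3600.0, 0.5   # step (s), decorr. time (s), std
phi = np.exp(-dt / tau)                    # AR(1) autocorrelation per step
r = 0.0
state, tendency = 280.0, 1e-5              # e.g. temperature (K) and K/s

for _ in range(96):                        # one day of 15-minute steps
    r = phi * r + np.sqrt(1 - phi**2) * sigma * rng.standard_normal()
    r = np.clip(r, -0.99, 0.99)            # keep the perturbation bounded
    state += dt * (1.0 + r) * tendency     # stochastically perturbed update

print(f"perturbed state after one day: {state:.3f} K")
```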

]]>