The 2D Cold-Ion Equations and Discontinuities in Their Solutions
The properties of the solution of 2D Cold-Ion Equations with
discontinuities in the initial conditions are studied. In the work of Perego
et al.*, it was shown that the solution of the 1D problem with discontinuous
initial conditions contains an infinite discontinuity that propagates
throughout the domain. The results in the 1D problem can be generalized
to 2D. Various conditions for cold-ion plasma flow are tested. A discussion
of the numerical method and results will be included in the talk.
*Perego, M., et al. "The expansion of a collisionless plasma into a plasma of lower density." Phys. Plasmas 20.5 (2013).
The Computational Complexity of Stochastic Galerkin and Collocation Methods for PDEs with Random Coefficients
We develop a rigorous cost metric for comparing the computational complexity of a general class of stochastic Galerkin methods and stochastic collocation methods for solving stochastic PDEs. Our approach allows us to calculate the cost of preconditioning both the Galerkin and collocation systems, as well as to account for the sparsity of the Galerkin projection. Theoretical complexity estimates will also be presented and validated using several computational examples.
Simulating Vesicle-Substrate Adhesion Using Two Phase Field Functions
A phase field model for simulating the adhesion of a
cell membrane to a substrate is constructed. The model features two phase field functions, one representing the membrane and the other the substrate. An energy functional is defined which accounts for the elastic bending energy and the contact potential energy, as well as, through a penalty method, vesicle volume and surface area constraints.
Numerical results are provided to verify our model and to provide visual illustrations of the interactions between a lipid vesicle and substrates having complex shapes.
A Multilevel Stochastic Collocation Method for SPDEs
Stochastic collocation methods for random partial differential equations suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort.
Multilevel methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. In this talk, we present convergence and computational-cost results for a multilevel stochastic collocation (MLSC) method, demonstrating its advantages over standard single-level approximations and highlighting conditions under which a sparse-grid MLSC approach is preferable to multilevel Monte Carlo (MLMC).
Tangential Interpolation for Data-Driven Model Reduction by the Eigensystem Realization Algorithm
Data-driven model reduction is a powerful technique for designing reduced models directly from input-output measurements of a dynamical system, without direct access to its internal dynamics. The Eigensystem Realization Algorithm (ERA) is a well-known data-driven algorithm for system identification and model reduction of linear dynamical systems. However, for systems with many inputs and outputs, and for slowly decaying dynamics, ERA may become infeasible due to the need to compute the full SVD of a large-scale dense Hankel matrix. In this work, we present an algorithm that aims to resolve this computational bottleneck by employing tangential interpolation in the construction of the reduced model. A numerical example is presented.
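For context, the standard SVD-based ERA construction (whose full SVD is the bottleneck mentioned above) can be sketched in a few lines. This is a minimal textbook version on an illustrative scalar system, not the tangential-interpolation variant proposed in the talk; the function and variable names are ours:

```python
import numpy as np

def era(markov, r, m, p):
    """Eigensystem Realization Algorithm: build a reduced order-r
    realization (Ar, Br, Cr) from Markov parameters h_1, h_2, ...
    (p-by-m impulse-response matrices)."""
    s = len(markov) // 2
    # Block Hankel matrix H and its time-shifted companion Hs
    H  = np.block([[markov[i + j]     for j in range(s)] for i in range(s)])
    Hs = np.block([[markov[i + j + 1] for j in range(s)] for i in range(s)])
    U, sig, Vt = np.linalg.svd(H, full_matrices=False)   # the costly step
    Ur, Vr = U[:, :r], Vt[:r, :].T
    Si = np.diag(sig[:r] ** -0.5)
    Ar = Si @ Ur.T @ Hs @ Vr @ Si        # reduced dynamics
    Br = (Si @ Ur.T @ H)[:, :m]          # first block column of Sigma^1/2 V^T
    Cr = (H @ Vr @ Si)[:p, :]            # first block row of U Sigma^1/2
    return Ar, Br, Cr

# sanity check on the scalar system x_{k+1} = 0.5 x_k + u_k, y_k = x_k,
# whose Markov parameters are h_k = 0.5**(k-1)
markov = [np.array([[0.5 ** k]]) for k in range(8)]
Ar, Br, Cr = era(markov, r=1, m=1, p=1)
```

For p outputs and m inputs, the Hankel matrix has dimensions (s·p)×(s·m), which is exactly why its dense SVD becomes prohibitive for many-input, many-output systems with slowly decaying dynamics.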
Analysis of a time-dependent fluid-structure interaction problem
in an optimal control framework over a single time-step
Fluid-structure interaction simulation presents many computational difficulties, particularly when the densities of the fluid and the structure are close. A previous report has suggested that recasting the FSI problem in the context of optimization may significantly reduce computation time. A modified control will be presented, along with a detailed analysis of the stability and the existence of an optimal solution for a given time-step. The existence of Lagrange multipliers is proved, and the resulting optimality system is derived. Numerical results from a gradient-based optimization algorithm will then be presented, confirming the effectiveness of optimization in computing an accurate solution.
Directly controlling large non-linear dynamical systems
using the underlying partial differential equation model is prohibitively expensive and
thus requires model reduction. Unfortunately, for these systems, current model
reduction techniques are either computationally intractable or are not guaranteed to preserve the necessary controllability and stability properties in the
reduced models. We seek to create a new technique by reducing the linear and
non-linear portions of the dynamical system separately in order to preserve
the necessary stability and controllability properties in the reduced model. In
our current work, we model a natural circulation cavity using the linearized
Boussinesq equations. The system is then reduced using the Iterative Rational
Krylov Algorithm (IRKA). We present results on the control of the reduced
linear system produced using IRKA compared to the control of the full linear system.
Ensemble Simulation Models, Algorithms and Analysis for Flow Simulations
This talk will be aimed at a general audience including graduate students in mathematics (and related areas that deal with fluids in motion). It is based on joint work with Nan Jiang.
The problem of predicting fluids in motion is beset with difficulties. Inevitable small errors in data, geometry, parameterization and discretization grow exponentially in time, with a rate constant that grows as the Reynolds number increases. (As an example, a 1 cm sphere creeping at 1 cm/sec through water already has Re = 100, and exp(+100t) rapidly becomes significant.) A standard method of increasing the reliability of predictions, expanding the window of predictability and evaluating the resulting uncertainty is through ensemble calculations. Unfortunately, computing flow ensembles immediately leads to a competition between high-resolution single simulations and multiple ensemble runs. This is a boundary between what can be done and what cannot. It is also an area replete with high-impact mathematics problems. This talk will describe the problem for fluids and present some methods of addressing it, including some (simple but apparently new) ideas. For the fluids specialists, new ensemble turbulence models will be presented, leading to a new mixing length. A few simple (but not common in the mathematics literature) examples of how ensemble simulations can be used to interrogate flows will be shown.
The Robert-Asselin (RA) time filter combined with the leapfrog scheme is widely used
in numerical models of weather and climate. It successfully suppresses the spurious
computational mode associated with the leapfrog method, but it also weakly damps the
physical mode and degrades the numerical accuracy. The Robert-Asselin-Williams (RAW)
time filter is a modification of the RA filter that reduces the undesired numerical damping
of the RA filter and increases the accuracy. We propose a higher-order Robert-Asselin (hoRA)
type time filter which effectively suppresses the computational modes and achieves third-order
accuracy with the same storage requirement as the RAW filter. Like the RA and RAW filters,
the hoRA filter is non-intrusive and therefore easy to implement. The leapfrog
scheme with the hoRA filter is almost as accurate, stable and efficient as the intrusive third-order
Adams-Bashforth (AB3) method.
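For reference, the classical RA/RAW filtering that this work builds on can be sketched as follows; the hoRA filter itself is the talk's contribution and is not reproduced here. The parameter values and the test problem are illustrative only:

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog time stepping with the Robert-Asselin-Williams (RAW)
    filter; alpha = 1 recovers the classical RA filter."""
    xs = [np.asarray(x0, dtype=float)]
    xs.append(xs[0] + dt * f(xs[0]))            # one Euler step to start
    xbar = xs[0]                                 # filtered value at level n-1
    for n in range(1, nsteps):
        xnew = xbar + 2.0 * dt * f(xs[n])        # leapfrog step
        d = 0.5 * nu * (xbar - 2.0 * xs[n] + xnew)
        xs[n] = xs[n] + alpha * d                # filter the middle level
        xbar = xs[n]
        xs.append(xnew - (1.0 - alpha) * d)      # RAW correction to new level
    return np.array(xs)

# harmonic oscillator u'' = -u; exact solution u(t) = cos(t)
traj = leapfrog_raw(lambda x: np.array([x[1], -x[0]]),
                    [1.0, 0.0], dt=0.01, nsteps=100)
```

The non-intrusive character mentioned in the abstract is visible here: the filter only post-processes the stored time levels, without changing how the right-hand side f is evaluated.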
Many fields (such as stochastic partial differential equations and parameter estimation) give rise to high dimensional interpolation and integration problems. Naive methods (full tensor grids) require prohibitively many function evaluations, even for moderate dimensions. Sparse grids can solve the problem to a similar degree of accuracy with far fewer points by building high dimensional rules from selected tensor products of one dimensional rules. If the one dimensional rule captures local behaviors, then the number of evaluations needed can be further reduced by adaptively refining the grid. In this talk, we will give an overview of sparse grids and discuss the construction of interpolatory wavelets and their use in sparse grids.
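The point-count savings described above can be made concrete with a minimal isotropic Smolyak construction. This sketch assumes nested Clenshaw-Curtis-style nodes and is not the adaptive wavelet grid discussed in the talk:

```python
import itertools
import numpy as np

def cc_nodes(level):
    """Nested 1D Clenshaw-Curtis-style nodes on [-1, 1]."""
    if level == 0:
        return np.array([0.0])
    n = 2 ** level + 1
    return np.cos(np.pi * np.arange(n) / (n - 1))

def sparse_grid(dim, level):
    """Isotropic Smolyak sparse grid: the union of tensor products of
    1D rules whose levels sum to at most `level`; nesting of the 1D
    nodes keeps the union small."""
    pts = set()
    for lv in itertools.product(range(level + 1), repeat=dim):
        if sum(lv) <= level:
            for p in itertools.product(*(cc_nodes(l) for l in lv)):
                pts.add(tuple(round(c, 12) for c in p))
    return pts

grid = sparse_grid(dim=4, level=3)
full = (2 ** 3 + 1) ** 4      # full tensor grid at the same 1D resolution
```

Even in only four dimensions the sparse grid uses a small fraction of the 6561 full-tensor points, and the gap widens rapidly with dimension.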
Comparison of Sparse and Sauer Approaches to Multivariate Interpolation
Interpolation is a numerical tool that transforms data into a formula;
in one spatial dimension, it's easy to set up a Lagrange basis,
evaluate the interpolant, and estimate the error. But in higher
dimensions, it's not clear how to choose the points (if we are allowed
to), how many points to use, or how to construct
a Lagrange basis. Moreover, certain choices of points will cause
many algorithms to fail, or to produce interpolants in unusual
polynomial spaces. One common approach to multivariate interpolation
uses sparse grids, which are more typically applied to integration
problems. Current research suggests other ways of computing an interpolant
in a way that makes the Lagrange basis apparent, handles
unusual point arrangements correctly, and makes it possible to
analyze the error behavior.
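The "easy" one-dimensional case mentioned above can be sketched in a few lines (the nodes and test function here are illustrative):

```python
import numpy as np

def lagrange_basis(xs, j, t):
    """Evaluate the j-th Lagrange basis polynomial for nodes xs at t."""
    terms = [(t - xs[k]) / (xs[j] - xs[k]) for k in range(len(xs)) if k != j]
    return np.prod(terms)

def interpolate(xs, ys, t):
    """Evaluate the Lagrange-form interpolant of the data (xs, ys) at t."""
    return sum(ys[j] * lagrange_basis(xs, j, t) for j in range(len(xs)))

nodes = np.array([0.0, 1.0, 2.0])
values = nodes ** 2             # three nodes reproduce x^2 exactly
result = interpolate(nodes, values, 1.5)
```

In one dimension any distinct nodes yield a well-defined basis like this; the difficulty the abstract describes is that no such automatic construction exists for scattered points in several variables.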
One of the most popular courses offered by the FSU Department
of Scientific Computing is a class on game design, which is specifically
intended to attract students from other majors. Each student team
is selected to include programmers, artists, writers, and creative thinkers
(not necessarily exclusive categories!). In this talk, I will discuss
the role of game design as a portal to careers in computational science,
present a computer game I have developed, and try to explain the
embarrassing fact that sometimes learning can be fun, and fun can be learning.
Stochastic sampling methods are arguably the most direct and least intrusive means of incorporating parametric uncertainty into numerical simulations of partial differential equations with random inputs. However, to achieve an overall error that is within a desired tolerance, a large number of sample simulations may be required (to control the sampling error), each of which may need to be run at high levels of spatial fidelity (to control the spatial error). Multilevel sampling methods aim to achieve the same accuracy as traditional sampling methods, but at a reduced computational cost, through the use of a hierarchy of spatial discretization models. Multilevel algorithms coordinate the number of samples needed at each discretization level by minimizing the computational cost subject to a given error tolerance. They can be applied to a variety of sampling schemes, exploit nesting when available, be implemented in parallel, and inform adaptive spatial refinement strategies. We extend the multilevel sampling algorithm to sparse grid stochastic collocation methods, discuss its numerical implementation and demonstrate its efficiency both theoretically and by means of numerical examples.
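The cost minimization that coordinates the per-level sample counts has a standard closed-form solution, sketched below with illustrative variances and costs (the numbers are not from the talk):

```python
import math

def mlmc_sample_counts(variances, costs, tol):
    """Cost-minimizing per-level sample counts for a multilevel
    sampling estimator: minimize sum(N_l * C_l) subject to the
    variance constraint sum(V_l / N_l) <= tol**2.  The Lagrange
    multiplier solution gives N_l proportional to sqrt(V_l / C_l)."""
    s = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [math.ceil(math.sqrt(v / c) * s / tol ** 2)
            for v, c in zip(variances, costs)]

# illustrative decay: variance shrinks while cost grows with each level
V = [1.0, 0.25, 0.0625]
C = [1.0, 4.0, 16.0]
N = mlmc_sample_counts(V, C, tol=0.1)
```

The savings come from this allocation taking most samples on the cheap coarse level and only a handful on the expensive fine levels, while the total variance still meets the tolerance.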
Proper Orthogonal Decomposition (POD) is a common technique for producing Reduced Order Models (ROMs) from known datasets. It is particularly relevant to datasets obtained from finite element simulations, where terms in the ROM correspond to linear combinations of basis functions from the original finite element discretization. There are two different approaches to estimating the inverse constants associated with a POD-ROM: POD inverse estimates and finite element inverse estimates. We use numerical experiments to compare the inverse estimates from both theories to determine which yields the stronger theoretical results.
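A minimal sketch of how a POD basis is typically extracted from snapshot data via the SVD; the snapshot field here is synthetic, not from the experiments in the talk:

```python
import numpy as np

def pod_basis(snapshots, r):
    """First r POD modes of a snapshot matrix whose columns are
    solution snapshots (e.g. finite element coefficient vectors at
    successive times), plus the fraction of 'energy' they capture."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return U[:, :r], energy[r - 1]

# a synthetic rank-2 space-time field: two modes capture it exactly
x = np.linspace(0.0, 1.0, 40)
t = np.linspace(0.0, 1.0, 50)
snaps = np.outer(np.sin(np.pi * x), np.cos(t)) + np.outer(x, t ** 2)
modes, captured = pod_basis(snaps, r=2)
```

Each column of `modes` is a coefficient vector over the original finite element basis, which is why inverse constants for the POD space can be studied either directly or through the underlying finite element theory.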
Protein therapeutics have been widely used over the last 80 years as treatments for various illnesses (e.g. diabetes, cancer, hemophilia, anemia, infectious diseases). Most of the cost of protein production is associated with the downstream separation and purification. Consequently, developing more efficient ways of purifying proteins could greatly decrease the cost of protein therapeutics. In this talk, we will discuss a method of protein separation using multi-modal porous membranes recently developed in Clemson University's chemical engineering department. We will present numerical simulations of the convection-diffusion equation that can be used to model these membranes. We will also present a brief analysis of the breakthrough curves obtained from the numerical simulations. In addition, we will discuss computational difficulties and inherent instabilities associated with solving the convection-diffusion equation containing an adsorption isotherm.
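The kind of breakthrough-curve computation described above can be sketched with a basic explicit upwind scheme on illustrative parameters. This is only the linear transport core: the adsorption-isotherm coupling and the membrane-specific model from the talk are omitted:

```python
import numpy as np

def breakthrough(nx=200, nt=2000, L=1.0, T=1.0, v=1.0, D=1e-3):
    """Explicit upwind/central scheme for u_t + v u_x = D u_xx on
    [0, L] with inlet concentration u(0, t) = 1 and an outflow
    condition at x = L; returns the outlet concentration over time
    (the breakthrough curve)."""
    dx, dt = L / nx, T / nt
    assert v * dt / dx <= 1.0 and 2.0 * D * dt / dx ** 2 <= 1.0  # stability
    u = np.zeros(nx + 1)
    outlet = []
    for _ in range(nt):
        un = u.copy()
        u[1:-1] = (un[1:-1]
                   - v * dt / dx * (un[1:-1] - un[:-2])          # upwind convection
                   + D * dt / dx ** 2
                     * (un[2:] - 2.0 * un[1:-1] + un[:-2]))      # diffusion
        u[0], u[-1] = 1.0, u[-2]          # inlet and outflow boundaries
        outlet.append(u[-1])
    return np.array(outlet)

curve = breakthrough()
```

Even this simplified setting hints at the difficulties mentioned in the abstract: convection dominates diffusion, so low-order schemes smear the front while higher-order ones oscillate, and a nonlinear adsorption term only sharpens the problem.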
3D Geometry Reconstruction from OCT Images for Bioresorbable Stent Simulations
The invention of the Bioresorbable Vascular Stent (BVS) opened a new era in cardiovascular intervention technology. The self-dissolving and drug-eluting features of the BVS are expected to provide long-term benefits for patients. The struts of the BVS, however, are much thicker than those of traditional metallic stents, to compensate for the weakness of the plastic-like dissolvable material. Many suspect that such thickness could cause excessive turbulence in the blood flow, leading to a high risk of thrombosis and restenosis. But no conclusion can be drawn without observing the actual blood flow inside the stented vessels of real patients. Therefore, patient-specific blood flow simulations are greatly needed. Current work on such simulations is limited to geometries with no clear representation of the stent. We have developed a process that produces high-quality patient-specific simulations using realistic stent geometries extracted from patients' OCT images, so that the shape and pattern of the stent are fully resolved in the simulations. Several numerical-mathematics techniques are employed in our simulation process: a shape-recognition algorithm to locate the stent struts in the OCT images, a specialized curve interpolation to reconstruct the stent pattern, and adaptive meshing to reduce computational cost. The preliminary results obtained for one patient demonstrate the quality of our geometry and the feasibility of our simulation process. Nonetheless, our team strives to improve the efficiency of the process by making aspects of it more automatic and systematic, and we aim to achieve both quality and quantity.