A Step by Step Guide to Solve 1D Burgers’ Equation with Physics-Informed Neural Networks (PINNs): A PyTorch Approach Using Automatic Differentiation and Collocation Methods

March 29, 2025
in AI & Technology

In this tutorial, we explore an innovative approach that blends deep learning with physical laws by leveraging Physics-Informed Neural Networks (PINNs) to solve the one-dimensional Burgers’ equation. Using PyTorch on Google Colab, we demonstrate how to encode the governing differential equation directly into the neural network’s loss function, allowing the model to learn the solution 𝑢(𝑥,𝑡) that inherently respects the underlying physics. This technique reduces the reliance on large labeled datasets and offers a fresh perspective on solving complex, non-linear partial differential equations using modern computational tools.
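For reference, the problem the network will learn is the viscous Burgers' equation, together with the initial and boundary conditions encoded in the code below:

\[
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}, \qquad \nu = \frac{0.01}{\pi},
\]

with initial condition \(u(x, 0) = -\sin(\pi x)\) on \(x \in [-1, 1]\) and homogeneous Dirichlet boundaries \(u(-1, t) = u(1, t) = 0\) for \(t \in [0, 1]\).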

!pip install torch matplotlib

First, we install the PyTorch and matplotlib libraries using pip, ensuring you have the necessary tools for building neural networks and visualizing the results in your Google Colab environment.


import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt


torch.set_default_dtype(torch.float32)

We import essential libraries: PyTorch for deep learning, NumPy for numerical operations, and matplotlib for plotting. We set the default tensor data type to float32 for consistent numerical precision throughout your computations.

x_min, x_max = -1.0, 1.0  # spatial domain
t_min, t_max = 0.0, 1.0   # temporal domain
nu = 0.01 / np.pi         # viscosity coefficient


N_f = 10000  # collocation points for the PDE residual
N_0 = 200    # initial-condition points
N_b = 200    # boundary-condition points


X_f = np.random.rand(N_f, 2)
X_f[:, 0] = X_f[:, 0] * (x_max - x_min) + x_min  # x in [-1, 1]
X_f[:, 1] = X_f[:, 1] * (t_max - t_min) + t_min  # t in [0, 1]


x0 = np.linspace(x_min, x_max, N_0)[:, None]
t0 = np.zeros_like(x0)
u0 = -np.sin(np.pi * x0)


tb = np.linspace(t_min, t_max, N_b)[:, None]
xb_left = np.ones_like(tb) * x_min
xb_right = np.ones_like(tb) * x_max
ub_left = np.zeros_like(tb)
ub_right = np.zeros_like(tb)


X_f = torch.tensor(X_f, dtype=torch.float32, requires_grad=True)
x0 = torch.tensor(x0, dtype=torch.float32)
t0 = torch.tensor(t0, dtype=torch.float32)
u0 = torch.tensor(u0, dtype=torch.float32)
tb = torch.tensor(tb, dtype=torch.float32)
xb_left = torch.tensor(xb_left, dtype=torch.float32)
xb_right = torch.tensor(xb_right, dtype=torch.float32)
ub_left = torch.tensor(ub_left, dtype=torch.float32)
ub_right = torch.tensor(ub_right, dtype=torch.float32)

We establish the simulation domain for the Burgers’ equation by defining the spatial and temporal boundaries, the viscosity, and the number of collocation, initial, and boundary points. We then generate random collocation points and evenly spaced initial- and boundary-condition points, and convert everything into PyTorch tensors, enabling gradient computation where needed.
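As a quick sanity check (our addition, not part of the original notebook), we can print the tensor shapes before training to confirm everything was assembled as intended:

# Optional sanity check: confirm the shapes of the assembled tensors.
for name, tensor in [("X_f", X_f), ("x0", x0), ("u0", u0), ("tb", tb)]:
    print(f"{name}: {tuple(tensor.shape)}")
# Expected: X_f (10000, 2); x0, u0, tb each (200, 1)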

class PINN(nn.Module):
    def __init__(self, layers):
        super(PINN, self).__init__()
        self.activation = nn.Tanh()
       
        layer_list = []
        for i in range(len(layers) - 1):
            layer_list.append(nn.Linear(layers[i], layers[i+1]))
        self.layers = nn.ModuleList(layer_list)
       
    def forward(self, x):
        for i, layer in enumerate(self.layers[:-1]):
            x = self.activation(layer(x))
        return self.layers[-1](x)


layers = [2, 50, 50, 50, 50, 1]
model = PINN(layers)
print(model)

Here, we define a custom Physics-Informed Neural Network (PINN) by extending PyTorch’s nn.Module. The network architecture is built dynamically using a list of layer sizes, where each linear layer is followed by a Tanh activation (except for the final output layer). In this example, the network takes a 2-dimensional input, passes it through four hidden layers (each with 50 neurons), and outputs a single value. Finally, the model is instantiated with the specified architecture, and its structure is printed.
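As an optional check (not in the original tutorial), counting the trainable parameters confirms that the instantiated network matches the [2, 50, 50, 50, 50, 1] specification:

# Optional: total trainable parameters.
# For [2, 50, 50, 50, 50, 1]: (2*50+50) + 3*(50*50+50) + (50*1+1) = 7851.
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {n_params}")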

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

Here, we check if a CUDA-enabled GPU is available, set the device accordingly, and move the model to that device for accelerated computation during training and inference.

def pde_residual(model, X):
    x = X[:, 0:1]
    t = X[:, 1:2]
    u = model(torch.cat([x, t], dim=1))
   
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True, retain_graph=True)[0]
   
    f = u_t + u * u_x - nu * u_xx
    return f


def loss_func(model):
    f_pred = pde_residual(model, X_f.to(device))
    loss_f = torch.mean(f_pred**2)
   
    u0_pred = model(torch.cat([x0.to(device), t0.to(device)], dim=1))
    loss_0 = torch.mean((u0_pred - u0.to(device))**2)
   
    u_left_pred = model(torch.cat([xb_left.to(device), tb.to(device)], dim=1))
    u_right_pred = model(torch.cat([xb_right.to(device), tb.to(device)], dim=1))
    loss_b = torch.mean(u_left_pred**2) + torch.mean(u_right_pred**2)
   
    loss = loss_f + loss_0 + loss_b
    return loss

Now, we compute the residual of Burgers’ equation at the collocation points by calculating the required derivatives via automatic differentiation. Then, we define a loss function that aggregates the PDE residual loss, the error from the initial condition, and the errors from the boundary conditions. This combined loss guides the network to learn a solution that satisfies both the physical law and the imposed conditions.
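In symbols, the combined objective implemented above is

\[
\mathcal{L} = \frac{1}{N_f}\sum_{i=1}^{N_f} f(x_i, t_i)^2
            + \frac{1}{N_0}\sum_{j=1}^{N_0} \big(u_\theta(x_j, 0) + \sin(\pi x_j)\big)^2
            + \frac{1}{N_b}\sum_{k=1}^{N_b} \big(u_\theta(-1, t_k)^2 + u_\theta(1, t_k)^2\big),
\]

where \(f = u_t + u\,u_x - \nu\,u_{xx}\) is the PDE residual and \(u_\theta\) denotes the network’s output. The three terms penalize violations of the PDE, the initial condition, and the boundary conditions, respectively.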

optimizer = optim.Adam(model.parameters(), lr=1e-3)
num_epochs = 5000


for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_func(model)
    loss.backward()
    optimizer.step()
   
    if (epoch+1) % 500 == 0:
        print(f'Epoch {epoch+1}/{num_epochs}, Loss: {loss.item():.5e}')
       
print("Training complete!")

Here, we set up the PINN’s training loop using the Adam optimizer with a learning rate of 1×10⁻³. Over 5000 epochs, we repeatedly compute the loss (which includes the PDE residual, initial-condition, and boundary-condition errors), backpropagate the gradients, and update the model parameters. Every 500 epochs, we print the current epoch and loss to monitor progress, and finally announce when training is complete.
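As an optional refinement that is not part of the original walkthrough, PINNs are often fine-tuned with the L-BFGS optimizer after Adam, since its full-batch quasi-Newton steps can drive the residual loss considerably lower. A minimal sketch:

# Optional refinement (not in the original tutorial): fine-tune with L-BFGS.
lbfgs = optim.LBFGS(model.parameters(), lr=1.0, max_iter=500,
                    tolerance_grad=1e-9, line_search_fn="strong_wolfe")

def closure():
    # L-BFGS re-evaluates the loss several times per step via this closure.
    lbfgs.zero_grad()
    loss = loss_func(model)
    loss.backward()
    return loss

lbfgs.step(closure)
print(f"Loss after L-BFGS refinement: {loss_func(model).item():.5e}")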

N_x, N_t = 256, 100
x = np.linspace(x_min, x_max, N_x)
t = np.linspace(t_min, t_max, N_t)
X, T = np.meshgrid(x, t)
XT = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))
XT_tensor = torch.tensor(XT, dtype=torch.float32).to(device)


model.eval()
with torch.no_grad():
    u_pred = model(XT_tensor).cpu().numpy().reshape(N_t, N_x)


plt.figure(figsize=(8, 5))
plt.contourf(X, T, u_pred, levels=100, cmap='viridis')
plt.colorbar(label="u(x,t)")
plt.xlabel('x')
plt.ylabel('t')
plt.title("Predicted solution u(x,t) via PINN")
plt.show()

Finally, we create a grid of points over the defined spatial (𝑥) and temporal (𝑡) domain, feed these points to the trained model to predict the solution 𝑢(𝑥, 𝑡), and reshape the output into a 2D array. We then visualize the predicted solution as a contour plot using matplotlib, complete with a colorbar, axis labels, and a title, allowing you to observe how the PINN has approximated the dynamics of the Burgers’ equation.
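As one extra validation step (our addition, not in the original tutorial), it is worth comparing the network’s prediction at t = 0 against the exact initial condition it was trained on:

# Optional check: compare the PINN at t = 0 with u(x, 0) = -sin(pi * x).
x_test = np.linspace(x_min, x_max, 256)[:, None]
xt0 = np.hstack((x_test, np.zeros_like(x_test)))
xt0_tensor = torch.tensor(xt0, dtype=torch.float32).to(device)

with torch.no_grad():
    u_t0 = model(xt0_tensor).cpu().numpy()

plt.figure(figsize=(6, 4))
plt.plot(x_test, -np.sin(np.pi * x_test), "k--", label="exact -sin(pi x)")
plt.plot(x_test, u_t0, "r", label="PINN at t = 0")
plt.xlabel("x")
plt.ylabel("u(x, 0)")
plt.legend()
plt.title("Initial-condition check")
plt.show()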

In conclusion, this tutorial has showcased how PINNs can be effectively implemented to solve the 1D Burgers’ equation by incorporating the physics of the problem into the training process. Through careful construction of the neural network, generation of collocation and boundary data, and automatic differentiation, we achieved a model that learns a solution consistent with the PDE and the prescribed conditions. This fusion of machine learning and traditional physics paves the way for tackling more challenging problems in computational science and engineering, inviting further exploration into higher-dimensional systems and more sophisticated neural architectures.


Here is the Colab Notebook.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views.

Credit: Source link
