Navigating Peer Dependency Woes with npm i --legacy-peer-deps

Introduction

When working with Node.js projects and managing dependencies using npm, encountering peer dependency issues is not uncommon. One solution to tackle these problems is the --legacy-peer-deps flag in the npm i (install) command. In this blog post, we will explore what peer dependencies are, why they can cause installation problems, and how the --legacy-peer-deps flag comes to the rescue.

Understanding Peer Dependencies

Peer dependencies are a way for a package to declare that it works alongside another package — its peer — which the consuming project is expected to provide. Unlike regular dependencies, peer dependencies were historically not installed automatically (npm v7 and later do install them by default); the package simply states which versions of the peer it is compatible with. This allows for more flexibility in managing dependency versions and helps prevent conflicts when several packages rely on the same dependency.
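As an illustration (the package names here are hypothetical), a UI library that renders with React but expects the host application to supply it might declare:

```json
{
  "name": "my-ui-library",
  "version": "1.0.0",
  "peerDependencies": {
    "react": ">=17.0.0 <19.0.0"
  }
}
```

Any project installing my-ui-library is then expected to have a React version within that range.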

The Challenge with Peer Dependencies

While peer dependencies offer flexibility, they can also introduce challenges, especially when different packages require different versions of the same peer dependency. Since npm v7, npm installs peer dependencies automatically and resolves them strictly, failing the install with an ERESOLVE error when the requested version ranges cannot be reconciled. This strictness surfaces genuine conflicts early, but it also blocks installs where the mismatch is harmless in practice.

The --legacy-peer-deps Flag

To address these challenges, npm provides the --legacy-peer-deps flag. The flag tells npm to fall back to the behavior of npm v6 and earlier: peer dependencies are not installed automatically, and version conflicts between them are ignored rather than treated as errors. This leniency can unblock installs that fail under the default strict resolution.

Using the Flag

To use the --legacy-peer-deps flag, simply append it to the npm i command:

npm i --legacy-peer-deps

Cautionary Notes

While the --legacy-peer-deps flag can be a helpful tool, it’s essential to use it cautiously. The more lenient algorithm it employs may lead to the installation of potentially incompatible versions of dependencies, introducing unforeseen issues in your project. Consider it as a last resort and explore alternative solutions before resorting to this flag.

Best Practices for Dealing with Peer Dependencies

  1. Update Dependencies: Check if there are newer versions of the packages causing peer dependency conflicts. Updating to the latest versions might resolve the issue without resorting to the legacy flag.
  2. Contact Package Maintainers: Reach out to the maintainers of the packages facing peer dependency conflicts. They may provide guidance or updates that address compatibility issues.
  3. Manual Dependency Resolution: Manually inspect and adjust the versions of conflicting dependencies in your project. This may involve specifying specific versions or ranges in your package.json file.
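For the manual-resolution route, npm (v8.3 and later) also supports an overrides field in package.json, which forces every package in the dependency tree to use a specific version of a dependency. The package name and version below are hypothetical:

```json
{
  "name": "my-app",
  "overrides": {
    "some-conflicting-lib": "2.1.0"
  }
}
```

Use overrides sparingly — like --legacy-peer-deps, it can mask genuine incompatibilities rather than resolve them.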

Conclusion

The --legacy-peer-deps flag in the npm install command is a useful tool for overcoming peer dependency issues in Node.js projects. However, it should be used with caution due to potential compatibility risks. Understanding peer dependencies, exploring alternative solutions, and following best practices will help you navigate through dependency conflicts more effectively in your Node.js projects.

Mastering Next.js in 10 Days: A Comprehensive Tutorial Series

🚀 Welcome to a 10-day journey where we unravel the power of Next.js, one of the most versatile and efficient React frameworks out there. Whether you’re a beginner or an experienced developer, this tutorial series will guide you through the essentials and beyond. Let’s dive into each day’s topics:

Day 1: Introduction to Next.js

Discover what Next.js is, understand its benefits, and learn the ropes of setting up your first Next.js project. We’ll also take a closer look at its key features.

Day 2: Basic Routing in Next.js

Navigate through the file-based routing system, create pages, and explore dynamic routing with parameters.

Day 3: Styling in Next.js

Delve into the world of styling with CSS-in-JS using styled-components. We’ll cover adding global styles, theming, and integrating with popular CSS frameworks.

Day 4: Data Fetching in Next.js

Learn the ins and outs of data fetching in Next.js, covering getStaticProps, server-side rendering (SSR) with getServerSideProps, and incremental static regeneration (ISR).

Day 5: Working with API Routes

Create powerful API routes within your Next.js app, handle different HTTP methods, and integrate seamlessly with external APIs.

Day 6: Next.js and State Management

Get hands-on with state management in Next.js. We’ll explore using React context for global state and integrating with popular state management libraries.

Day 7: Optimizing Performance in Next.js

Fine-tune your Next.js app for optimal performance. This day covers image optimization with next/image, code splitting strategies, and analyzing and improving overall app performance.

Day 8: Authentication in Next.js

Unpack the world of authentication in Next.js, covering user authentication strategies, integration with authentication providers, and securing routes and resources.

Day 9: Deploying a Next.js App

Prepare your Next.js app for deployment, explore deployment options on popular hosting platforms, and set up continuous deployment for a seamless workflow.

Day 10: Advanced Topics in Next.js

In the final day, we’ll explore advanced topics, including customizing webpack configuration, extending Next.js functionality with plugins, and best practices for advanced usage.

Are you ready to level up your Next.js skills? Follow along each day as we unravel the layers of Next.js, empowering you to build robust and performant web applications. Let the coding adventure begin! 🚀💻 #NextJS #WebDevelopment #TutorialSeries

Note

I'm only putting "Mastering" in the title for SEO purposes — the posts will generally stay at a beginner level 😂

Angular New Syntax for Control Flow: A Comparative Overview

Angular has always been a framework that prioritizes developer experience, and its latest release, Angular 17, is no exception. One of the key changes introduced in Angular 17 is a new syntax for control flow in templates. This new syntax is more expressive, efficient, and easier to maintain than the previous syntax.

Old Syntax vs. New Syntax

In Angular 16 and earlier, control flow was handled primarily with structural directives such as *ngIf and *ngFor, along with the ngSwitch directive family. These directives are powerful and flexible, but they can also be verbose and difficult to read.

<div *ngIf="showTable">
  <table>
    </table>
</div>

<div *ngFor="let item of items">
  <p>{{ item }}</p>
</div>

<div [ngSwitch]="variable">
  <p *ngSwitchCase="'case1'">Case 1</p>
  <p *ngSwitchCase="'case2'">Case 2</p>
  <p *ngSwitchDefault>Default</p>
</div>

The new control flow in Angular 17 uses a declarative block syntax built directly into the template language, with keywords such as @if, @else, @switch, @case, @default, @for, and @empty. Because these blocks are part of the framework rather than structural directives, they require no imports, and the result is more concise and easier to read.

@if (showTable) {
  <table>
  </table>
}

<ul>
  @for (item of items; track item) {
    <li>{{ item }}</li>
  }
</ul>

@switch (variable) {
  @case ('case1') {
    <p>Case 1</p>
  }
  @case ('case2') {
    <p>Case 2</p>
  }
  @default {
    <p>Default</p>
  }
}

Benefits of the New Syntax

The new syntax for control flow in Angular 17 offers several benefits over the old syntax:

  • Improved readability: The new syntax is more concise and easier to read, making it easier to understand and maintain code.
  • Enhanced expressiveness: The new syntax allows for more expressive control flow constructs, making it easier to write clear and concise code.
  3. Easier migration: Angular provides an automatic migration schematic — ng generate @angular/core:control-flow — to help you seamlessly transition from the old syntax to the new one.

Conclusion

The new syntax for control flow in Angular 17 is a significant improvement over the old syntax. It is more expressive, efficient, and easier to maintain. If you are still using the old syntax, I encourage you to migrate to the new syntax as soon as possible. You will find that it is a more enjoyable and productive development experience.

Optimizing Web Performance with Output Caching Middleware in C#

Introduction

In the fast-paced world of web development, optimizing website performance is paramount. Users expect websites to load quickly and responsively. One powerful technique for achieving this goal is output caching. Output caching stores the output of a web page or a portion of it, so it can be reused for subsequent requests, reducing the need for redundant processing. In this blog post, we’ll explore how to implement Output Caching Middleware in C# to enhance the performance of your web applications.

Understanding Output Caching

Output caching involves storing the HTML output generated by a web page or a portion of it, such as a user control or a custom view, in memory. When subsequent requests are made for the same content, the cached output is returned directly, bypassing the need for re-rendering the page or executing the underlying logic. This significantly reduces server load and improves response times.

Implementing Output Caching Middleware in C#

Implementing output caching in C# involves creating custom middleware. Middleware in ASP.NET Core provides a way to handle requests and responses globally as they flow through the pipeline.

Step 1: Create Output Caching Middleware

First, create a class for your middleware. This class should implement the IMiddleware interface and contain the caching logic. Because it implements IMiddleware, the class must also be registered with the dependency injection container — for example, services.AddTransient<OutputCachingMiddleware>(); in ConfigureServices — or the UseMiddleware call in the next step will fail at runtime.

public class OutputCachingMiddleware : IMiddleware
{
    private readonly MemoryCache _cache;

    public OutputCachingMiddleware()
    {
        _cache = new MemoryCache(new MemoryCacheOptions());
    }

    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        var cacheKey = context.Request.Path.ToString();

        if (_cache.TryGetValue(cacheKey, out string cachedResponse))
        {
            // If a cached response is found, return it directly
            await context.Response.WriteAsync(cachedResponse);
            return;
        }

        // Not cached: buffer the response produced by the rest of the pipeline
        var originalBodyStream = context.Response.Body;
        using (var responseBody = new MemoryStream())
        {
            context.Response.Body = responseBody;
            try
            {
                await next(context);

                responseBody.Seek(0, SeekOrigin.Begin);
                cachedResponse = await new StreamReader(responseBody).ReadToEndAsync();

                // Only cache successful GET responses
                if (HttpMethods.IsGet(context.Request.Method) &&
                    context.Response.StatusCode == StatusCodes.Status200OK)
                {
                    _cache.Set(cacheKey, cachedResponse, TimeSpan.FromMinutes(10)); // Cache for 10 minutes
                }

                responseBody.Seek(0, SeekOrigin.Begin);
                await responseBody.CopyToAsync(originalBodyStream);
            }
            finally
            {
                // Restore the original stream so later code isn't left
                // pointing at the disposed MemoryStream
                context.Response.Body = originalBodyStream;
            }
        }
    }
}

Step 2: Register Middleware in Startup.cs

In your Startup.cs file, add the following code to register your custom middleware in the Configure method.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Other middleware registrations

    app.UseMiddleware<OutputCachingMiddleware>();

    // More middleware registrations
}

Conclusion

Output caching middleware is a powerful tool in your web development arsenal, significantly improving the performance and responsiveness of your web applications. By implementing this technique, you can reduce server load, decrease response times, and enhance user experience. Remember to carefully consider cache duration and the content you cache to strike a balance between performance and serving up-to-date content to your users. Happy coding!

Getting Started with FaaS: A Quick Introduction

Unlocking the Power of Serverless: Exploring FaaS (Function as a Service)

In the ever-evolving landscape of cloud computing, one term that has gained significant attention in recent years is “Serverless.” While it might sound like a system without servers, that’s not entirely true. Instead, serverless computing shifts the responsibility of server management from the developer to the cloud provider. Among the various serverless offerings, “Function as a Service” (FaaS) is at the forefront. In this blog post, we’ll take an in-depth look at FaaS, its principles, real-time examples, benefits, and challenges.

Understanding the Basics of FaaS

At its core, FaaS is a cloud computing service that allows developers to run individual functions or pieces of code in response to events without managing the underlying infrastructure. In a traditional server-based model, developers are responsible for provisioning, scaling, and maintaining servers to run their applications. In contrast, FaaS abstracts away the server layer, enabling developers to focus solely on writing code.

How FaaS Works

Here’s a simplified overview of how FaaS works:

  1. Event Triggers: FaaS functions are triggered by various events. These events can be HTTP requests, changes in data, scheduled tasks, or custom events generated by other services in your application.
  2. Function Execution: When an event occurs, the corresponding function is executed. The cloud provider automatically provisions the necessary resources to run the function, ensuring it can handle the event’s workload.
  3. Stateless Execution: FaaS functions are designed to be stateless, meaning they don’t retain any information between invocations. This encourages scalability, as functions can be executed in parallel without concerns about shared state.
  4. Scaling: FaaS platforms automatically scale the number of function instances up or down based on the incoming workload. This elasticity ensures optimal resource utilization and cost efficiency.
  5. Billing: With FaaS, you pay only for the execution time and resources consumed during function execution, making it a cost-effective option for many use cases.

Now that we have a basic understanding of FaaS, let’s explore some real-time examples to see how it’s being applied in the world of technology.

Real-World Examples of FaaS

1. Image Processing

Imagine you’re running a photo-sharing platform where users can upload high-resolution images. To ensure a smooth user experience, you need to generate thumbnails, apply filters, and optimize images for various screen sizes. Instead of setting up and managing servers to handle this workload, you can use FaaS. When a user uploads an image, an event triggers a serverless function responsible for image processing. This function generates thumbnails and applies filters, all without the need for manual server management.

2. IoT Data Ingestion

In the Internet of Things (IoT) realm, billions of devices generate vast amounts of data. Processing this data in real-time is a daunting task. FaaS can help by processing IoT data as it arrives. For example, a smart home security system can use FaaS to analyze video feeds from security cameras. When motion is detected, a serverless function can trigger an alert and send notifications to homeowners, all while efficiently utilizing resources based on demand.

3. Chatbots and Voice Assistants

Chatbots and voice assistants like Siri, Alexa, and Google Assistant rely on FaaS to handle user requests. When a user asks a question or issues a command, a serverless function processes the request, retrieves the necessary information, and responds. This architecture allows for rapid scalability to handle fluctuations in user traffic and provides a seamless user experience.

4. Webhooks and API Integrations

Webhooks are a common way for applications to communicate with each other in real-time. For instance, an e-commerce platform can use FaaS to handle incoming webhook events from payment gateways. When a customer makes a purchase, a serverless function can process the payment confirmation, update the order status, and trigger follow-up actions like sending shipping notifications.

5. Data Transformation and ETL (Extract, Transform, Load)

Businesses often need to transform and move data between systems. FaaS is well-suited for such data processing tasks. For instance, a retail company can use serverless functions to extract sales data from an online store, transform it into a standardized format, and load it into a data warehouse for analytics, all in response to data change events.

The Benefits of FaaS

Now that we’ve seen real-world examples of FaaS in action, let’s delve into the benefits that make it an attractive option for modern application development.

1. Cost Efficiency

One of the most significant advantages of FaaS is its cost efficiency. Traditional server-based models require ongoing server maintenance costs, regardless of whether the servers are actively processing requests. With FaaS, you pay only for the actual execution time of your functions. This “pay as you go” model can lead to substantial cost savings, particularly for applications with varying workloads.

2. Scalability and Elasticity

FaaS platforms automatically handle the scaling of resources. When your application experiences a surge in traffic or events, additional function instances are spun up to meet the demand. Conversely, when the load decreases, surplus resources are deprovisioned. This dynamic scaling ensures optimal resource utilization and guarantees that your application can handle fluctuations without manual intervention.

3. Rapid Development and Deployment

FaaS promotes rapid development and deployment cycles. Developers can focus on writing code for specific functions without worrying about server provisioning or configuration management. This agility allows teams to deliver new features and updates faster, reducing time-to-market.

4. Reduced Operational Overhead

Serverless architectures relieve developers of many operational responsibilities. Tasks like server maintenance, OS updates, and security patches are managed by the cloud provider. This reduces the operational overhead and allows development teams to concentrate on writing code and building features.

5. High Availability

Most FaaS providers offer built-in redundancy and high availability. Functions are automatically distributed across multiple data centers and regions. In the event of a failure in one location, traffic is rerouted to healthy instances, ensuring uninterrupted service availability.

6. Event-Driven Architecture

FaaS naturally lends itself to event-driven architectures, which are well-suited for modern applications. By reacting to events such as user actions, data changes, or external triggers, FaaS functions can perform specific tasks efficiently, making applications more responsive and adaptable.

Challenges and Considerations

While FaaS offers numerous benefits, it’s essential to be aware of its limitations and considerations:

1. Cold Start Latency

One challenge with FaaS is the potential for “cold starts.” When a function is triggered, there may be a slight delay as the cloud provider initializes a new execution environment for that function. Cold starts can impact real-time responsiveness, but they can often be mitigated through various strategies, such as using warm-up techniques or optimizing code.

2. Stateless Nature

FaaS functions are designed to be stateless, meaning they don’t retain information between invocations. While this promotes scalability, it can be a limitation for applications that require stateful operations. Managing stateful operations may require additional complexity, such as using external databases or caching systems.

3. Vendor Lock-In

Adopting a FaaS platform often results in vendor lock-in. Each cloud provider has its own proprietary FaaS offering, and migrating functions between providers can be non-trivial. Organizations should carefully consider this when choosing a FaaS provider.

4. Debugging and Monitoring

Debugging and monitoring distributed, event-driven FaaS applications can be challenging. Tools and practices for tracing and debugging functions across different execution environments are essential to maintain application reliability.

5. Complexity of Function Management

As the number of functions in an application grows, managing and coordinating them can become complex. Proper organization, versioning, and orchestration of functions become crucial aspects of managing a serverless application.

FaaS Providers

Several major cloud providers offer FaaS platforms, each with its own set of features and pricing models. Some of the notable providers include:

  1. AWS Lambda: Amazon Web Services’ FaaS offering provides a wide range of integration options and supports various programming languages, making it one of the most popular choices.
  2. Azure Functions: Microsoft’s FaaS platform offers seamless integration with other Azure services, making it an excellent choice for organizations invested in the Microsoft ecosystem.
  3. Google Cloud Functions: Google’s serverless offering is tightly integrated with Google Cloud services and provides a straightforward development experience.

Conclusion

Function as a Service (FaaS) is revolutionizing the way developers build and deploy applications. With its focus on event-driven, serverless execution, FaaS enables rapid development, cost efficiency, and automatic scalability. Real-world examples illustrate how FaaS can be applied across various domains, from image processing to IoT data ingestion.

While FaaS offers numerous benefits, it’s essential to consider its limitations and challenges, such as cold start latency, statelessness, and vendor lock-in. By carefully evaluating these factors and selecting the right FaaS provider, organizations can harness the power of serverless computing to create more agile, scalable, and cost-effective applications.

As the cloud computing landscape continues to evolve, FaaS remains at the forefront of modern application development, empowering developers to focus on writing code that adds value while leaving the infrastructure management to the experts. Whether you’re a seasoned developer or just starting your journey in the world of serverless computing, exploring FaaS can unlock new possibilities and efficiencies in your projects.

Integrating a YouTube Player Component in Next.js: A Step-by-Step Guide

Introduction

Integrating a YouTube player component into your Next.js application can greatly enhance user engagement by allowing them to view and interact with video content directly on your website. In this step-by-step guide, we will walk you through the process of seamlessly integrating a YouTube player component using the YouTube Iframe API. By the end of this tutorial, you will have a fully functional YouTube player that can be easily customized and controlled within your Next.js application.

Steps

Setting up the Project

  • Begin by creating a new Next.js project or using an existing one.
  • Install the react-youtube package, a convenient wrapper around the YouTube Iframe API, using the command: npm install react-youtube.

Creating the YouTube Player Component

  • Create a new file called YouTubePlayer.js within your Next.js components directory.
  • Import the necessary dependencies:
import React from 'react';
import YouTube from 'react-youtube';

Define the YouTubePlayer component and its required props:

const YouTubePlayer = ({ videoId }) => {
  // Set up event handlers
  const onReady = (event) => {
    // Access the player instance
    const player = event.target;

    // For example, you can automatically play the video
    player.playVideo();
  };

  const onError = (error) => {
    console.error('YouTube Player Error:', error);
  };

  return (
    <YouTube
      videoId={videoId}
      onReady={onReady}
      onError={onError}
    />
  );
};

export default YouTubePlayer;

Implementing the YouTube Player in your Next.js Page:

  • Open the desired Next.js page where you want to integrate the YouTube player.
  • Import the YouTubePlayer component:
import React from 'react';
import YouTubePlayer from '../components/YouTubePlayer';

Include the YouTubePlayer component within your page component’s JSX:

const HomePage = () => {
  return (
    <div>
      <h1>Welcome to My Next.js App!</h1>
      <YouTubePlayer videoId="bmD-tZe8HBA" />
    </div>
  );
};

export default HomePage;

Customization and Further Development:

  • Customize the appearance and behavior of the YouTube player component by modifying the component’s JSX and the associated CSS.
  • Explore the YouTube Iframe API documentation for additional functionality and options that can be integrated into your Next.js application.

Conclusion

By following this comprehensive guide, you have successfully integrated a YouTube player component into your Next.js application. This dynamic addition allows users to view and interact with video content directly on your website, boosting engagement and improving user experience. Feel free to explore further customization options and extend the functionality to suit your specific requirements. With the power of Next.js and the YouTube Iframe API, you can create a captivating and interactive user experience on your website.

GitHub Repo: https://github.com/PandiyanCool/nextjs-youtube-player

Vercel Demo: https://nextjs-youtube-player.vercel.app/

Getting Started: Building Your First Blog with Next.js and Markdown

Introduction

In this tutorial, we will walk through the process of building a blog using Next.js and Markdown. Next.js is a popular React framework that provides server-side rendering, automatic code splitting, and other powerful features. Markdown is a lightweight markup language used for creating formatted text documents. By combining Next.js and Markdown, we can create a fast and dynamic blog with a seamless writing experience. Let’s get started!

Prerequisites: Before we begin, make sure you have the following prerequisites:

  • Basic knowledge of React and Next.js
  • Node.js installed on your machine
  • Familiarity with HTML and CSS

Step 1: Setting Up a Next.js Project To get started, let’s create a new Next.js project. Open your terminal and run the following commands:

npx create-next-app my-blog
cd my-blog

Step 2: Installing Dependencies Next, let’s install the required dependencies for our blog. We need gray-matter to parse the Markdown files, and remark and remark-html to convert Markdown to HTML. Run the following command:

npm install gray-matter remark remark-html

Step 3: Creating Markdown Files In the root of your project, create a posts directory. Inside the posts directory, create a new Markdown file (for example, my-first-post.md) with the following content:

---
title: My First Blog Post
date: 2023-06-01
---

# Welcome to My Blog!

This is my first blog post. Enjoy!

Step 4: Creating the Blog Page In the pages directory, create a new file called blog.js. In this file, let’s create a component that will fetch the Markdown files and render them as blog posts. Add the following code:

import fs from 'fs';
import path from 'path';
import matter from 'gray-matter';
import { remark } from 'remark';
import html from 'remark-html';

export default function Blog({ posts }) {
  return (
    <div>
      <h1>My Blog</h1>
      {posts.map((post) => (
        <div key={post.slug}>
          <h2>{post.frontmatter.title}</h2>
          <p>{post.frontmatter.date}</p>
          <div dangerouslySetInnerHTML={{ __html: post.content }} />
        </div>
      ))}
    </div>
  );
}

export async function getStaticProps() {
  const postsDirectory = path.join(process.cwd(), 'posts');
  const fileNames = fs.readdirSync(postsDirectory);

  const posts = await Promise.all(
    fileNames.map(async (fileName) => {
      const filePath = path.join(postsDirectory, fileName);
      const fileContent = fs.readFileSync(filePath, 'utf8');
      const { data, content } = matter(fileContent);

      const processedContent = await remark().use(html).process(content);
      const contentHtml = processedContent.toString();

      return {
        slug: fileName.replace(/\.md$/, ''),
        frontmatter: {
          ...data,
          date: data.date.toISOString(), // Convert date to string
        },
        content: contentHtml,
      };
    })
  );

  return {
    props: {
      posts,
    },
  };
}

Step 5: Styling the Blog Page Let’s add some basic styles to make our blog page look better. Create a new CSS file called blog.module.css in the styles directory and add the following code:

.container {
  max-width: 600px;
  margin: 0 auto;
  padding: 20px;
}

.post {
  margin-bottom: 20px;
}

.title {
  font-size: 24px;
  font-weight: bold;
}

.date {
  color: #888;
  font-size: 14px;
}

.content {
  margin-top: 10px;
}

Update the Blog component in blog.js to include the CSS classes:

import styles from '../styles/blog.module.css';

// ...

export default function Blog({ posts }) {
  return (
    <div className={styles.container}>
      <h1>My Blog</h1>
      {posts.map((post) => (
        <div key={post.slug} className={styles.post}>
          <h2 className={styles.title}>{post.frontmatter.title}</h2>
          <p className={styles.date}>{post.frontmatter.date}</p>
          <div
            className={styles.content}
            dangerouslySetInnerHTML={{ __html: post.content }}
          />
        </div>
      ))}
    </div>
  );
}

Step 6: Running the Application Finally, let’s run our Next.js application and see our blog in action. Run the following command:

npm run dev

Visit http://localhost:3000/blog in your browser, and you should see your blog with the first post displayed.

Conclusion

In this tutorial, we learned how to build a blog using Next.js and Markdown. We covered the steps to set up a Next.js project, parse Markdown files, and render them as blog posts. We also added basic styling to enhance the appearance of our blog. With this foundation, you can expand the blog functionality by adding features like pagination, category filtering, and commenting. Happy blogging!

I hope this detailed tutorial helps you build a blog using Next.js and Markdown. Feel free to customize and extend the code to suit your specific needs.

Advanced LINQ Query Techniques in C# with Code Samples

Introduction

Language-Integrated Query (LINQ) is a set of technologies that allow developers to write queries in a declarative, SQL-like syntax directly in C# or other .NET languages. With LINQ, you can easily query data from different data sources such as arrays, lists, databases, and XML documents. LINQ also provides a rich set of operators for filtering, sorting, grouping, joining, and aggregating data. In this blog post, we will explore some of the advanced LINQ query techniques that you can use in C# to write more efficient and expressive queries.

Prerequisites

To follow along with the examples in this blog post, you should have a basic understanding of LINQ and C#. You should also have Visual Studio installed on your computer.

Grouping Operators

Grouping is a powerful technique that allows you to group elements in a sequence based on a common key. LINQ provides several grouping operators that you can use to group elements in different ways.

GroupBy

The GroupBy operator is used to group elements in a sequence based on a specified key. The key is determined by a lambda expression that selects the key value from each element in the sequence. The GroupBy operator returns a sequence of groups, where each group is represented by a group object that contains a key and a sequence of elements that share the same key.

Here’s an example that demonstrates how to use the GroupBy operator to group a list of products by category:

class Product
{
    public string Name { get; set; }
    public string Category { get; set; }
    public decimal Price { get; set; }
}

List<Product> products = new List<Product>
{
    new Product { Name = "Product A", Category = "Category 1", Price = 10.0M },
    new Product { Name = "Product B", Category = "Category 2", Price = 20.0M },
    new Product { Name = "Product C", Category = "Category 1", Price = 30.0M },
    new Product { Name = "Product D", Category = "Category 2", Price = 40.0M },
    new Product { Name = "Product E", Category = "Category 3", Price = 50.0M }
};

var groups = products.GroupBy(p => p.Category);

foreach (var group in groups)
{
    Console.WriteLine($"Category: {group.Key}");

    foreach (var product in group)
    {
        Console.WriteLine($"Product: {product.Name}, Price: {product.Price}");
    }

    Console.WriteLine();
}

The output of this code is:

Category: Category 1
Product: Product A, Price: 10.0
Product: Product C, Price: 30.0

Category: Category 2
Product: Product B, Price: 20.0
Product: Product D, Price: 40.0

Category: Category 3
Product: Product E, Price: 50.0

As you can see, the products are grouped by category, and each group contains a key and a sequence of products that share the same category.

GroupBy with Projection

You can also use the GroupBy operator with projection to project a sequence of elements into a new form before grouping them. The projection is done using a lambda expression that transforms each element in the sequence into a new form.

Here’s an example that demonstrates how to use the GroupBy operator with projection to group a list of products by the first letter of their name:

// Project products into a new form that contains the first letter of their name and the product itself
var groups = products.GroupBy(p => p.Name[0], p => p);

foreach (var group in groups)
{
    Console.WriteLine($"Products that start with '{group.Key}'");

    foreach (var product in group)
    {
        Console.WriteLine($"Product: {product.Name}, Category: {product.Category}, Price: {product.Price}");
    }

    Console.WriteLine();
}


The output of this code is:

Products that start with 'P'
Product: Product A, Category: Category 1, Price: 10.0
Product: Product B, Category: Category 2, Price: 20.0
Product: Product C, Category: Category 1, Price: 30.0
Product: Product D, Category: Category 2, Price: 40.0
Product: Product E, Category: Category 3, Price: 50.0

Since every product name in our list starts with the letter 'P', the result contains a single group. With more varied names (say "Book A" or "Camera B"), GroupBy would produce one group per distinct first letter, each containing the elements that start with that letter.

Join Operators

Joining is a common operation in database queries that allows you to combine data from two or more tables based on a common key. LINQ provides several join operators that you can use to join data from different sources.

Join

The Join operator is used to join two sequences based on a common key. The common key is specified by two lambda expressions, one for each sequence, that extract the key value from each element in the sequence. The Join operator returns a new sequence that contains elements that match the key value in both sequences.

Here’s an example that demonstrates how to use the Join operator to join a list of products with a list of suppliers based on their category:

class Supplier
{
    public string Name { get; set; }
    public string Category { get; set; }
}

List<Supplier> suppliers = new List<Supplier>
{
    new Supplier { Name = "Supplier A", Category = "Category 1" },
    new Supplier { Name = "Supplier B", Category = "Category 2" },
    new Supplier { Name = "Supplier C", Category = "Category 3" }
};

var query = from product in products
            join supplier in suppliers on product.Category equals supplier.Category
            select new { Product = product.Name, Supplier = supplier.Name };

foreach (var result in query)
{
    Console.WriteLine($"Product: {result.Product}, Supplier: {result.Supplier}");
}

The output of this code is:

Product: Product A, Supplier: Supplier A
Product: Product B, Supplier: Supplier B
Product: Product C, Supplier: Supplier A
Product: Product D, Supplier: Supplier B
Product: Product E, Supplier: Supplier C

As you can see, the products are joined with the suppliers based on their category, and each result contains the product name and the supplier name.

GroupJoin

The GroupJoin operator is similar to the Join operator, but instead of returning a flat sequence of results, it returns a hierarchical result that groups the matching elements from the second sequence into a sequence of their own.

Here’s an example that demonstrates how to use the GroupJoin operator to group a list of products by their category and include the suppliers that provide products for each category:

// Build the distinct list of categories from the products defined earlier
var categories = products.Select(p => p.Category).Distinct();

var query = from category in categories
            join product in products on category equals product.Category into productsByCategory
            join supplier in suppliers on category equals supplier.Category into suppliersByCategory
            select new
            {
                Category = category,
                Products = productsByCategory,
                Suppliers = suppliersByCategory
            };

foreach (var result in query)
{
    Console.WriteLine($"Category: {result.Category}");

    Console.WriteLine("Products:");

    foreach (var product in result.Products)
    {
        Console.WriteLine($"- {product.Name}");
    }

    Console.WriteLine("Suppliers:");

    foreach (var supplier in result.Suppliers)
    {
        Console.WriteLine($"- {supplier.Name}");
    }

    Console.WriteLine();
}

As you can see, the result is a hierarchical structure that groups the products and suppliers by category.

Set Operators

Set operators are used to perform set operations on sequences, such as union, intersection, and difference. LINQ provides several set operators that you can use to combine and compare sequences. One caveat: these operators compare elements with the default equality comparer, so for a reference type like our Product class you need to override Equals and GetHashCode (or pass an IEqualityComparer&lt;Product&gt;) to compare products by value. The examples below assume Product implements value equality that way.

Union

The Union operator is used to combine two sequences into a single sequence that contains distinct elements from both sequences. The Union operator returns a new sequence that contains elements from both sequences, with duplicates removed.

Here’s an example that demonstrates how to use the Union operator to combine two lists of products into a single list:

var list1 = new List<Product>
{
    new Product { Name = "Product A", Category = "Category 1", Price = 10.0M },
    new Product { Name = "Product B", Category = "Category 2", Price = 20.0M },
    new Product { Name = "Product C", Category = "Category 1", Price = 30.0M },
};

var list2 = new List<Product>
{
    new Product { Name = "Product D", Category = "Category 2", Price = 40.0M },
    new Product { Name = "Product E", Category = "Category 3", Price = 50.0M },
};

var query = list1.Union(list2);

foreach (var product in query)
{
    Console.WriteLine($"Product: {product.Name}, Category: {product.Category}, Price: {product.Price}");
}

The output of this code is:

Product: Product A, Category: Category 1, Price: 10.0
Product: Product B, Category: Category 2, Price: 20.0
Product: Product C, Category: Category 1, Price: 30.0
Product: Product D, Category: Category 2, Price: 40.0
Product: Product E, Category: Category 3, Price: 50.0

As you can see, the Union operator combines the elements from both lists and removes duplicates.

Intersect

The Intersect operator is used to compare two sequences and return a sequence that contains elements that are present in both sequences. The Intersect operator returns a new sequence that contains elements that are common to both sequences, with duplicates removed.

Here’s an example that demonstrates how to use the Intersect operator to compare two lists of products and return a list of products that are present in both lists:

var list1 = new List<Product>
{
    new Product { Name = "Product A", Category = "Category 1", Price = 10.0M },
    new Product { Name = "Product B", Category = "Category 2", Price = 20.0M },
    new Product { Name = "Product C", Category = "Category 1", Price = 30.0M },
};

var list2 = new List<Product>
{
    new Product { Name = "Product C", Category = "Category 1", Price = 30.0M },
    new Product { Name = "Product D", Category = "Category 2", Price = 40.0M },
    new Product { Name = "Product E", Category = "Category 3", Price = 50.0M },
};

var query = list1.Intersect(list2);

foreach (var product in query)
{
    Console.WriteLine($"Product: {product.Name}, Category: {product.Category}, Price: {product.Price}");
}

The output of this code is:

Product: Product C, Category: Category 1, Price: 30.0

As you can see, the Intersect operator compares the elements from both lists and returns only the elements that are present in both lists.

Except

The Except operator is used to compare two sequences and return a sequence that contains elements that are present in the first sequence but not in the second sequence. The Except operator returns a new sequence that contains elements that are unique to the first sequence, with duplicates removed.

Here’s an example that demonstrates how to use the Except operator to compare two lists of products and return a list of products that are present in the first list but not in the second list:

var list1 = new List<Product>
{
    new Product { Name = "Product A", Category = "Category 1", Price = 10.0M },
    new Product { Name = "Product B", Category = "Category 2", Price = 20.0M },
    new Product { Name = "Product C", Category = "Category 1", Price = 30.0M },
};

var list2 = new List<Product>
{
    new Product { Name = "Product C", Category = "Category 1", Price = 30.0M },
    new Product { Name = "Product D", Category = "Category 2", Price = 40.0M },
    new Product { Name = "Product E", Category = "Category 3", Price = 50.0M },
};

var query = list1.Except(list2);

foreach (var product in query)
{
    Console.WriteLine($"Product: {product.Name}, Category: {product.Category}, Price: {product.Price}");
}

The output of this code is:

Product: Product A, Category: Category 1, Price: 10.0
Product: Product B, Category: Category 2, Price: 20.0

As you can see, the Except operator compares the elements from both lists and returns only the elements that are unique to the first list.

Conclusion

LINQ is a powerful feature in C# that allows you to query data from different data sources using a unified syntax. With LINQ, you can write expressive and concise queries that are easy to read and maintain.

In this blog post, we’ve covered some advanced LINQ query techniques, including grouping, set operations, and join operations. We’ve also included code examples to demonstrate how to use these techniques in practice.

By using these advanced LINQ techniques, you can write more complex queries and get more insights from your data. You can also optimize your queries for better performance and reduce the amount of code you need to write.

Remember, LINQ is not just a feature for querying data. It’s a language-integrated query technology that can be used for a variety of purposes, including manipulating data, transforming data, and creating new data structures.

To become proficient in LINQ, you need to understand its core concepts and features, such as query expressions, deferred execution, and lambda expressions. You also need to be familiar with the different LINQ operators and know when to use them.

With the knowledge and skills you’ve gained from this blog post, you can start using LINQ in your projects and take your C# programming skills to the next level.

Happy coding!

Creating a Todo Application using Next.js

Next.js is a framework for building server-rendered React applications. It provides a powerful set of features for web development such as automatic code splitting, server-side rendering, and static site generation. In this blog post, we will be creating a simple Todo application using Next.js.

Setting up the project

To get started, you will need to have Node.js and npm (or yarn) installed on your machine. Once you have these dependencies set up, you can create a new Next.js project using the following command:

npx create-next-app my-todo-app

This will create a new directory called “my-todo-app” with the basic file structure and dependencies for a Next.js app.

Creating the Todo List component

In this step, we will create a TodoList component that will display a list of todo items. Create a new file called TodoList.js in the components folder and add the following code:

import React from 'react';

const TodoList = ({ todos, deleteTodo }) => {
  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id}>
          <span>{todo.text}</span>
          <button onClick={() => deleteTodo(todo.id)}>Delete</button>
        </li>
      ))}
    </ul>
  );
};

export default TodoList;

In this code, we render an unordered list and map over the todos prop to create a list item for each todo. Each item also includes a button that calls the deleteTodo prop with the todo’s id; we will define that function in the TodoPage component.

Adding the Todo Form

Now that we have the TodoList component, we need to create a form to add new todo items. Create a new file called TodoForm.js in the components folder and add the following code:


import React, { useState } from 'react';

const TodoForm = ({ addTodo }) => {
  const [text, setText] = useState('');

  const handleSubmit = (e) => {
    e.preventDefault();
    if (!text) return;
    addTodo(text);
    setText('');
  };

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="text"
        value={text}
        onChange={(e) => setText(e.target.value)}
        placeholder="Add Todo..."
      />
    </form>
  );
};

export default TodoForm;


In this code, we create a form with an input that lets the user enter a new todo item. When the form is submitted, it calls the addTodo function with the input text as an argument and then resets the text state.

Creating the TodoPage

Create a new file called TodoPage.js in the pages folder and add the following code:


import React, { useState } from 'react';
import TodoList from '../components/TodoList';
import TodoForm from '../components/TodoForm';

const TodoPage = () => {
  // useState keeps the todo list in component state
  const [todos, setTodos] = useState([]);

  const addTodo = (text) => {
    // note: length-based ids can collide after deletions; fine for this demo
    setTodos([...todos, { id: todos.length + 1, text }]);
  };

  const deleteTodo = (id) => {
    setTodos(todos.filter((todo) => todo.id !== id));
  };

  return (
    <div>
      <TodoForm addTodo={addTodo} />
      <TodoList todos={todos} deleteTodo={deleteTodo} />
    </div>
  );
};

export default TodoPage;


In this file, we are creating a TodoPage component that contains the TodoForm and TodoList components. We are also using React’s useState hook to manage the state of the todo items. The addTodo function is passed down to the TodoForm component as a prop and is called when a new todo item is added. The deleteTodo function is passed down to the TodoList component as a prop and is called when a todo item is deleted.

Adding Routing

Add the following code to your pages/index.js file to render the TodoPage by default:

import TodoPage from './TodoPage';

export default function Home() {
  return <TodoPage />;
}

Now the user will be able to access the TodoPage by visiting the root of your application.

That’s it! You now have a working Todo application built with Next.js. You can customize the application further by adding styles, saving the todo items to a database, or adding more features.

Adding Styles

You can add styles to your Todo application using a CSS preprocessor like SASS or using CSS-in-JS libraries like styled-components.

If you decide to use a CSS preprocessor, you will need to install the necessary dependencies and configure Next.js to use it. You can add the CSS files to the styles directory in the root of your project.

If you prefer to use styled-components, you can install it using npm or yarn by running the following command:

npm install styled-components

And then you can import it in your TodoForm.js and TodoList.js and add styles to your components.

import styled from 'styled-components';

const TodoForm = ({ addTodo }) => {
  // ...
  return (
    <Form onSubmit={handleSubmit}>
      <Input
        type="text"
        value={text}
        onChange={(e) => setText(e.target.value)}
        placeholder="Add Todo..."
      />
    </Form>
  );
};

const Form = styled.form`
  display: flex;
  margin-bottom: 16px;
`;

const Input = styled.input`
  flex: 1;
  padding: 8px;
  border-radius: 4px;
  border: 1px solid #ccc;
`;


Saving Todo items to a database

To save the todo items to a database, you will need to create a backend service that the Next.js app can communicate with. You can use a variety of technologies to build the backend, such as Node.js with Express, Python with Flask or Ruby on Rails.

In your backend service, you will need to create a REST API that the frontend can send requests to for creating, reading, updating, and deleting todo items.

Then you call the API from the TodoPage component’s functions, like addTodo and deleteTodo, to perform the CRUD operations on todo items.

You can use a library like axios, or the built-in fetch, to communicate with the backend service.
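Whichever backend you choose, the API boils down to CRUD operations over a todo collection. Here is a framework-agnostic sketch of that store in plain Python (class and method names are illustrative; a real service would wrap this in REST endpoints and persist to a database):

```python
import itertools

class TodoStore:
    """In-memory CRUD store for todos; a real backend would persist to a database."""
    def __init__(self):
        self._todos = {}
        self._ids = itertools.count(1)

    def create(self, text):
        todo = {'id': next(self._ids), 'text': text}
        self._todos[todo['id']] = todo
        return todo

    def read_all(self):
        return list(self._todos.values())

    def update(self, todo_id, text):
        self._todos[todo_id]['text'] = text
        return self._todos[todo_id]

    def delete(self, todo_id):
        return self._todos.pop(todo_id, None)

store = TodoStore()
todo = store.create('Buy milk')
print(store.read_all())   # [{'id': 1, 'text': 'Buy milk'}]
store.delete(todo['id'])
print(store.read_all())   # []
```

Each method maps naturally onto one REST endpoint (POST, GET, PUT, DELETE), which the frontend would call from addTodo and deleteTodo.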

In summary, creating a Todo application using Next.js is a straightforward process, and you can extend it further with styles, routing, and persistence. It’s a great way to learn more about building web applications with React and Next.js, and you can apply the concepts you learn here to build more advanced applications in the future.

Twitter sentiment analysis with python library

With the help of Twitter app credentials, I started working on another sentiment analysis experiment.

I used a couple of popular libraries this time.

  • TextBlob
  • Tweepy

TextBlob is used to check the polarity of the given text.

Tweepy is used to fetch tweets from real-time data.
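TextBlob reports polarity as a float between -1.0 and 1.0. The bucketing logic used in the script below can be factored into a small helper; this sketch is plain Python, so it needs no Twitter credentials:

```python
def classify(polarity):
    """Bucket a TextBlob polarity score (a float in [-1.0, 1.0]) into a label."""
    if polarity > 0.0:
        return 'positive'
    if polarity < 0.0:
        return 'negative'
    return 'neutral'

print(classify(0.35))   # positive
print(classify(-0.10))  # negative
print(classify(0.0))    # neutral
```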

To use Tweepy, we first need to create an app in the Twitter developer portal. Once the app is created, we need to get the following tokens from it.

api_key = 'XXXX'

api_key_secret = 'XXXX'

access_token = 'XXXX'

access_token_secret = 'XXXX'

You can find the entire main.py file content below.

from textblob import TextBlob
import tweepy

api_key = 'XXXX'
api_key_secret = 'XXXX'
access_token = 'XXXX'
access_token_secret = 'XXXX'

auth_handler = tweepy.OAuthHandler(consumer_key=api_key, consumer_secret=api_key_secret)
auth_handler.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth_handler)

search_terms = 'peace'
tweet_count = 200

tweets = tweepy.Cursor(api.search_tweets, q=search_terms, lang='en').items(tweet_count)

overall_polarity = 0
positive_polarity = 0
neutral_polarity = 0
negative_polarity = 0

for tweet in tweets:
    cleaned_up_text = tweet.text.replace('RT', '')
    if cleaned_up_text.startswith(' @'):
        position = cleaned_up_text.index(':')
        cleaned_up_text = cleaned_up_text[position + 2:]
    if cleaned_up_text.startswith('@'):
        position = cleaned_up_text.index(' ')
        cleaned_up_text = cleaned_up_text[position + 2:]

    analysis = TextBlob(cleaned_up_text)
    overall_polarity += analysis.polarity

    if analysis.polarity > 0.00:
        positive_polarity += 1
    elif analysis.polarity < 0.00:
        negative_polarity += 1
        print(cleaned_up_text)
    elif analysis.polarity == 0.00:
        neutral_polarity += 1

print(f'overall: {overall_polarity}')
print(f'positive: {positive_polarity}')
print(f'negative: {negative_polarity}')
print(f'neutral: {neutral_polarity}')

Twitter sentiment analysis using python

I regularly do data analysis on different kinds of data.

By the way, I did my master’s at BITS Pilani in Data Analytics, so I keep doing some analysis on data.

For example, I once analysed my Skype history data and captured some fun facts, like:

  • How many times was my name spelled wrongly?
  • Who did that the most?
  • How many times was a “good morning” message received in the evening?
  • How many times did I receive an “urgent” call from contacts?
  • What were the average, median, min, and max times people waited for me to reply “hi” to their “hi”? (People really shouldn’t have to wait for the other person to return a greeting.)
  • Likewise, some more analysis that captured other fun facts.
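Statistics like those reply-time fun facts take only a few lines with Python’s built-in statistics module (the delays below are made up for illustration):

```python
from statistics import mean, median

# Hypothetical reply delays, in seconds, pulled from a chat history export
wait_times = [5, 42, 130, 7, 3600, 15]

print(f"min: {min(wait_times)}s, max: {max(wait_times)}s")
print(f"mean: {mean(wait_times):.1f}s, median: {median(wait_times)}s")
```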

The ongoing social media topic right now is “Elon Musk buys Twitter”.

Let’s do some sentiment analysis with Twitter data.

I hadn’t worked on sentiment analysis before, so I referred to the internet to do it with the help of Python libraries.

import re
import tweepy
from tweepy import OAuthHandler
from textblob import TextBlob

class TwitterClient(object):
	'''
	Generic Twitter Class for sentiment analysis.
	'''
	def __init__(self):
		'''
		Class constructor or initialization method.
		'''
		# keys and tokens from the Twitter Dev Console
		consumer_key = 'xxxxxxxxxxxxxxxxxxxxxx'
		consumer_secret = 'xxxxxxxxxxxxxxxxxxxxxx'
		access_token = 'xxxxxxxxxxxxxxxxxxxxxx'
		access_token_secret = 'xxxxxxxxxxxxxxxxxxxxxx'

		# attempt authentication
		try:
			# create OAuthHandler object
			self.auth = OAuthHandler(consumer_key, consumer_secret)
			# set access token and secret
			self.auth.set_access_token(access_token, access_token_secret)
			# create tweepy API object to fetch tweets
			self.api = tweepy.API(self.auth)
		except:
			print("Error: Authentication Failed")

	def clean_tweet(self, tweet):
		'''
		Utility function to clean tweet text by removing links, special characters
		using simple regex statements.
		'''
		return ' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)", " ", tweet).split())

	def get_tweet_sentiment(self, tweet):
		'''
		Utility function to classify sentiment of passed tweet
		using textblob's sentiment method
		'''
		# create TextBlob object of passed tweet text
		analysis = TextBlob(self.clean_tweet(tweet))
		# set sentiment
		if analysis.sentiment.polarity > 0:
			return 'positive'
		elif analysis.sentiment.polarity == 0:
			return 'neutral'
		else:
			return 'negative'

	def get_tweets(self, query, count = 10):
		'''
		Main function to fetch tweets and parse them.
		'''
		# empty list to store parsed tweets
		tweets = []

		try:
			# call twitter api to fetch tweets
			fetched_tweets = self.api.search_tweets(q = query, count = count)

			# parsing tweets one by one
			for tweet in fetched_tweets:
				# empty dictionary to store required params of a tweet
				parsed_tweet = {}

				# saving text of tweet
				parsed_tweet['text'] = tweet.text
				# saving sentiment of tweet
				parsed_tweet['sentiment'] = self.get_tweet_sentiment(tweet.text)

				# appending parsed tweet to tweets list
				if tweet.retweet_count > 0:
					# if tweet has retweets, ensure that it is appended only once
					if parsed_tweet not in tweets:
						tweets.append(parsed_tweet)
				else:
					tweets.append(parsed_tweet)

			# return parsed tweets
			return tweets

		except tweepy.TweepyException as e:
			# print error (if any); TweepError was renamed TweepyException in Tweepy v4
			print("Error : " + str(e))

def main():
	# creating object of TwitterClient Class
	api = TwitterClient()
	# calling function to get tweets
	tweets = api.get_tweets(query = 'elon musk bought twitter', count = 200)

	# picking positive tweets from tweets
	ptweets = [tweet for tweet in tweets if tweet['sentiment'] == 'positive']
	# percentage of positive tweets
	print("Positive tweets percentage: {} %".format(100*len(ptweets)/len(tweets)))
	# picking negative tweets from tweets
	ntweets = [tweet for tweet in tweets if tweet['sentiment'] == 'negative']
	# percentage of negative tweets
	print("Negative tweets percentage: {} %".format(100*len(ntweets)/len(tweets)))
	# percentage of neutral tweets
	print("Neutral tweets percentage: {} % \
		".format(100*(len(tweets) -(len( ntweets )+len( ptweets)))/len(tweets)))

	# printing first 10 positive tweets
	print("\n\nPositive tweets:")
	for tweet in ptweets[:10]:
		print(tweet['text'])
		print("-------------------")

	# printing first 10 negative tweets
	print("\n\nNegative tweets:")
	for tweet in ntweets[:10]:
		print(tweet['text'])

if __name__ == "__main__":
	# calling main function
	main()

Output

Positive tweets percentage: 42.1875 %
Negative tweets percentage: 12.5 %
Neutral tweets percentage: 45.3125 % 		


Positive tweets:
@realDond_Trump It’s interesting how this new position came right after Elon Musk bought Twitter. Wow! When they co… https://t.co/Nua5PamFnJ
-------------------
RT @BelindaW75: So did all of you come to Twitter only after Elon Musk bought it? I already had an account but didn't really use it til Elo…
-------------------
@rtenews Irish billionaire Denis o brien owns most the Irish media.

Amazon creator Jeff benzos owns washington pos… https://t.co/uUs13EaSrd
-------------------
I and many other people bought into it. But the fact of the matter is, he can post as many silly memes on Twitter a… https://t.co/IBqdu1MDVt
-------------------
RT @OlympTrade: 🦾 Elon Musk bought 9.2% of the Twitter stock. The stock rose by more than 20% immediately after that. 
 
 🤑 Our traders mad…
-------------------
RT @amtvmedia: Elon Musk started PayPal that censors conservative political speech now he bought Twitter because free speech. Hahaha!! Amer…
-------------------
RT @hqfNFT: (5/10) With Twitter getting bought out by Elon Musk, he'd be pushing Twitter's boundaries in many aspects, and I believe one sm…
-------------------
RT @TrungTPhan: Larry Ellison and Elon Musk are buds.

Larry bought $1B of Tesla in December 2018. The stake is worth ~$15B now.

He just p…
-------------------
My hot take: Elon Musk bought twitter soley to delete @ElonJet instead of giving him this 50k
-------------------
RT @Freedom9369: Elon Musk just bought Instagram (after acquiring Twitter and Pepsi, plus owning the new Starlink Satellite System. Humm. W…
-------------------


Negative tweets:
RT @azidan_GOS: Elon Musk bought twitter yesterday, and in less than 24hours he has already changed the colour of the like and Retweet butt…
Arsenal is playing better Barca style than us right now.
Our last games were absolute dog shit, shocking to see the… https://t.co/3k1nLY1v2p
RT @Dream: Elon Musk bought Twitter when it’s literally free 🤣🤣😂😂🤣😝😝 what an idiot
RT @TimRunsHisMouth: It took the Biden administration less than 3 days to create a Ministry of Truth after Elon Musk bought Twitter...  Ima…
elon musk bought twitter and some weird troll kept banging on my youtube channel and not my brand site
@mfer4lyfe Maybe … anything can happen .. Elon Musk bought twitter for 4billy and if he’s been secretly buying into… https://t.co/zAup584dAA
@3nyoom @mfer4lyfe What if … Elon Musk has secretly been buying into Bayc and just bought twitter ; then he goes fo… https://t.co/aEMDzCviZq
RT @_TheShawn: Wait hold up, so you mean to tell me that we can tweet whatever we want now without worrying about a suspension since Elon M…


Happy coding!

This is just an experiment I have made. I will post an improved edition once I complete it.

Reference

https://www.geeksforgeeks.org/twitter-sentiment-analysis-using-python/

Technical Debt

You might have come across the term over-engineering. Good programmers sometimes do that intentionally to avoid technical debt.

Technical debt is a concept in software development that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer.

Boxing yourself with bad code.

Technical debt keeps rising while you box yourself in with a bad code base. The main cause of technical debt is attempting to solve a particular problem in our software with shortcuts or quick fixes.

Refactoring is the key.


Boxing yourself in with bad code eventually makes the code base too complex for further changes or maintenance. Refactoring the code from time to time helps keep it adaptive.

Quick Fix – No longer works

When we apply a quick fix to a problem, we should be aware that we are accepting the technical debt it creates. Doing this regularly turns the code base into the worst place to visit.

At some point the code no longer allows you to change anything: every attempt to add or modify something breaks something else.

Growing technical debt makes the system more fragile. When a crucial fix is urgently needed, you can’t make it without breaking something. That is the interest we pay on the debt we have been accumulating from the beginning.

Common mistakes that increase technical debt:

  • Rushing to a deadline without building the core components
  • Shipping an MVP (Minimum Viable Product) and never revisiting it
  • Working only for the “now” moment
  • Promoting an unaccomplished POC (Proof of Concept) to production

Hard Coding

You start killing your code when you start hard-coding.

It might sound crazy, but hard-coding forces you into multiple iterations whenever you attempt to change something in the software.

Let’s consider a message used throughout the application: “This request has been submitted successfully”, repeated in multiple places.

Suddenly, we need to change every occurrence of that message to something else, say “Submitted successfully to server.”

Since it is simple text, you hard-coded it throughout the application and assumed a change would be straightforward.

Actually it isn’t. So what’s the approach here? How will you change that text in 100 places across your application?

  • Find and replace?
  • Regex to match text and replace?
  • Or by using some magic refactor tool?

Even the best of those approaches fails when the text is split across two lines or is simply missed by the search. Earlier, I used a language translation service in my Angular app.

The application supports multiple languages or variants, like US English, UK English, or other languages, so every piece of text used in the application is stored in a particular JSON file.

If a warning message is used in 100 places in the application, a single key in that JSON file serves as the source for every occurrence of the message.

If I want to change the text, I don’t have to worry much about either the change or the testing. If a simple text change can cost that much time, imagine touching core functionality in the code base. So avoid hard-coding whenever possible.
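The same idea works outside Angular too. Here is a minimal, framework-agnostic sketch in Python of a message catalog backed by JSON (the file contents and key names are illustrative):

```python
import json

# Hypothetical catalog; in a real app this would live in e.g. messages.en-US.json
CATALOG_JSON = '''
{
    "request.submitted": "This request has been submitted successfully",
    "request.failed": "The request could not be submitted"
}
'''

messages = json.loads(CATALOG_JSON)

def msg(key):
    """Look up user-facing text by key instead of hard-coding it at each call site."""
    return messages[key]

# Every call site references the key, so a wording change happens in exactly one place
print(msg("request.submitted"))
```

Changing the wording now means editing one JSON value, and testing one lookup, no matter how many call sites use it.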


Adaptive Code

Don’t build temporary solutions that work only for today and merely save your application from the current bug.

Working on temporary solutions increases the technical debt in the code base.

Technical debt makes your code base fragile, and you have to work harder to achieve even simple results or changes in the software.

Technical debt brings too many bugs, which slows down building new features that depend on existing code.

Think through the important ways the code can change in the future, and refactor it if it is going to take on massive changes or new features.


Most of the time, you can’t easily justify that the code base needs refactoring.

Every single time, we will get a question like:

“It works great, why should we touch it?” And you have to be able to justify your answer every time.

As an architect or developer, one should think through all the ways the code base might change in the future. One possible way to handle it is adaptive code.

Don’t fit yourself inside the user story

Think of adaptive code instead of just fulfilling or satisfying the current user story.

If your code doesn’t have any future purpose, never worry about it. Just build a adaptive which help you to improve yourself as well as improve the code base standard.

Earlier, we discussed the minimalist approaches that help build quality code.

Building faster is pointless when it isn’t built right. The art of saying no, or YAGNI (“You Aren’t Gonna Need It”), is an important skill in the software world. The software should not overflow with features requested by everyone.

DRY – Instead of duplicating or repeating code, replace it with an abstraction: a function or a class.
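As a tiny illustration of DRY (a hypothetical example, with a made-up tax rate), the repeated expression is replaced by one named function, so the rule lives in exactly one place:

```javascript
// Before: the same rule duplicated at every call site.
//   const priceA = Math.round(itemA.price * 1.18); // add 18% tax
//   const priceB = Math.round(itemB.price * 1.18); // add 18% tax

// After: one named abstraction, one place to change the rate.
const TAX_RATE = 0.18;
function withTax(price) {
  return Math.round(price * (1 + TAX_RATE));
}

console.log(withTax(100)); // 118
```

When the rate changes, only `TAX_RATE` changes; there is no hunt through the code base for every copy of the magic number.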

Keep your technical debt down, and deliver the deliverables on time.

We should decide what counts as an emergency. If there is a bug in production that needs immediate attention and a fix, we may have to accept the debt and handle it later. But we should make sure we don’t treat a simple text change or a minor new feature as an excuse for technical debt.

Happy Coding!

To get latest updates you can follow or subscribe! #peace

Code Review best practices

Everyone has their own set of best practices. Here are a few points I would like to share with you regarding code review.

The aim of code review is to build team spirit, improve code quality, and collaboratively learn best practices.

One feature at a time

Make sure your CRR (code review request) or commits are based on a single feature, story, or bug fix. Keeping multiple features or bug fixes in a single code review request creates confusion. So keep it simple.

Add members to review

Add everyone from the team to your code review request. At least two reviewers should review your code before it is merged to the remote repository.

Information about what has been changed

Add information about what has been changed in the CRR, and link the related ticket/story/bug (in most cases). This helps the peer reviewers get insight into the task.

Notify the team

Send an instant message to your team when the CRR is sent, or when an individual finishes reviewing a particular request.

If you have an automated system like a webhook or Slack notification, that’s fine. Otherwise, it’s OK to maintain a separate channel or group to discuss CRRs.

Write it simple and clean

Keep the commit message concise and clear (if it is a bug fix, mention it explicitly).

When you are reviewing, look into the code and make sure you understand what it actually does; if there are any doubts or clarification is needed, highlight the code and add a comment asking for clarification.

The aim is readable code, so that the rest of the team can understand it too.

Be an advisor

If you find the code difficult to understand, or see that it could be simpler, feel free to suggest a better way to do it.

It’s a good habit to suggest something concrete instead of just mentioning that a particular piece of code can be improved.

Maintain patience

Don’t rush to get your code reviewed; give the reviewer some time, and add a gentle reminder if it takes too long.

Be gentle

Stay humble; all these processes exist to help us improve.

The code review process improves code quality and builds team spirit. Collaboratively, we can learn a lot from code reviews.

Happy Coding!

Memoization for Optimal Data Fetching in Next.js

Next.js offers a powerful toolkit for building modern web applications. A crucial aspect of Next.js development is efficiently fetching data to keep your application dynamic and user-friendly. Here’s where memoization comes in – a technique that optimizes data fetching by preventing redundant network requests.

What is Memoization?

Memoization is an optimization strategy that caches the results of function calls. When a function is called with the same arguments again, the cached result is returned instead of re-executing the function. In the context of Next.js data fetching, memoization ensures that data fetched for a specific URL and request options is reused throughout your component tree, preventing unnecessary API calls.
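The general technique can be shown outside of any framework. This is a generic sketch of memoization, not Next.js's internal implementation: a wrapper caches results keyed by the stringified arguments, so repeated calls with the same inputs skip the real work.

```javascript
// Generic memoize helper: cache results by argument key.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key);
  };
}

let calls = 0;
const fetchPage = memoize((url) => {
  calls += 1; // count how often the "real" fetch runs
  return `data for ${url}`;
});

fetchPage('/products?page=2');
fetchPage('/products?page=2'); // served from cache, no second call
console.log(calls); // 1
```

Next.js applies this same idea to `fetch` calls automatically: identical requests made during a render are deduplicated against a shared cache.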

Benefits of Memoization:

  • Enhanced Performance: By reusing cached data, memoization significantly reduces network requests, leading to faster page loads and a smoother user experience.
  • Reduced Server Load: Fewer requests to your server free up resources for other tasks, improving overall application scalability.

Understanding Memoization in Next.js Data Fetching:

React, the foundation of Next.js, employs memoization by default for data fetching within components. This applies to:

  • getStaticProps and getServerSideProps: Even though these functions run on the server, the subsequent rendering of the components on the client-side can benefit from memoization.
  • Client-side fetching with fetch or data fetching libraries: Memoization helps prevent redundant calls within the React component tree.

Real-world Example: Product Listing with Pagination

Imagine a Next.js e-commerce app with a product listing page that uses pagination for better navigation. Here’s how memoization can optimize data fetching:

// ProductList.js

import React from 'react';

function ProductList({ products }) {
  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}

export async function getStaticProps(context) {
  // For a dynamic route like /products/[page], the page number comes from
  // context.params; fall back to page 1 when no param is present.
  const page = context.params?.page ?? 1;
  const response = await fetch(`https://api.example.com/products?page=${page}`);
  const products = await response.json();

  return {
    props: { products },
    revalidate: 60, // revalidate data every minute (optional)
  };
}

export default ProductList;

In this example, getStaticProps fetches product data for a specific page ahead of time, and revalidate refreshes it at most once a minute. If a user clicks through pagination links back to a page they have already visited (e.g., page=2), the pre-rendered, cached result is served instead of making a new API call.

Additional Considerations:

  • Memoization Limitations: Memoization only applies within the same render pass. If a component unmounts and remounts, the cache won’t be used.
  • Custom Logic for Dynamic Data: If your data fetching relies on factors beyond URL and request options (e.g., user authentication or data in the URL path), you’ll need additional logic to handle cache invalidation or data updates.
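One way to handle such cases can be sketched as follows. This is a hand-rolled illustration, not a Next.js API: the cache key includes every factor that affects the response (here, a hypothetical user id alongside the URL), and an explicit invalidate() hook handles data updates.

```javascript
// Sketch: a cache keyed by (userId, url), with manual invalidation.
function createUserCache(fetcher) {
  const cache = new Map();
  return {
    get(userId, url) {
      const key = `${userId}:${url}`;
      if (!cache.has(key)) cache.set(key, fetcher(userId, url));
      return cache.get(key);
    },
    invalidate(userId, url) {
      // Drop the entry so the next get() refetches fresh data.
      cache.delete(`${userId}:${url}`);
    },
  };
}

let hits = 0;
const cache = createUserCache((userId, url) => {
  hits += 1;
  return `response for ${userId} at ${url}`;
});

cache.get('u1', '/cart');
cache.get('u1', '/cart'); // cached, no new fetch
cache.get('u2', '/cart'); // different user, fresh fetch
console.log(hits); // 2
```

The key design point is that anything capable of changing the response must be part of the cache key; otherwise two users (or two auth states) would silently share data.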

Tips for Effective Memoization:

  • Leverage Data Fetching Libraries: Libraries like SWR or React Query provide built-in memoization and caching mechanisms for data fetching, simplifying implementation.
  • Control Caching Behavior: Next.js allows you to control cache headers for specific data requests using the revalidate option in getStaticProps or custom caching logic for client-side fetches.

By effectively using memoization in your Next.js applications, you can optimize data fetching, enhance performance, and provide a more responsive user experience. Remember, a well-crafted caching strategy is essential for building performant and scalable Next.js applications.

2 books 🙌

I’ve finished reading a couple of books this year so far.

1. “Let’s Talk Money” by Monika Halan

2. “Deep Work” by Cal Newport

“Let’s Talk Money” by Monika Halan

“Let’s Talk Money: You’ve Worked Hard for It, Now Make It Work for You” by Monika Halan serves as a comprehensive and accessible guide to personal finance for the Indian audience. Halan breaks down complex financial concepts into digestible pieces, covering essential topics such as budgeting, saving, investing, insurance, and retirement planning. With a focus on empowering readers to take control of their financial destinies, she provides actionable advice and real-life examples, making the book a useful resource for individuals seeking financial security and prosperity in India’s dynamic economic landscape. Someone new to personal finance can read this book cover to cover; experienced readers can skim it and finish quickly. Overall, it’s a good read.

“Deep Work” by Cal Newport

“Deep Work: Rules for Focused Success in a Distracted World” by Cal Newport explores the value of deep, focused work in an age of constant distraction. Newport argues that the ability to concentrate without distraction is becoming increasingly rare and valuable in today’s knowledge economy. I personally use the Pomodoro Technique to do deep work in my day-to-day activities. The author is also known for his TED talk on quitting social media and the impact that creates. Even though the book is mainly about deep work, I feel an equally important topic it discusses is quitting social media. I would suggest that everyone regulate their social media usage, maintain proper digital well-being, and use the benefits of social media wisely.

I have few more books in queue to read later this year 🫶

Happy Reading!