The Consumer Conundrum: Navigating Change in Microservices Without Gridlock

By providing valuable insights and actionable solutions, this article aims to empower you to navigate the complexities of change in your microservices environment and unlock its full potential.

Understanding the Conundrum:

Imagine a bustling city where every traffic light change requires approval from every driver affected. Chaos and gridlock would ensue, mirroring the potential impact of the Consumer Conundrum, a critical anti-pattern in the world of microservices. This pattern emerges when making changes to a service requires seeking approval from every downstream consumer, effectively holding development hostage to individual consumer needs.

The Culprits and Consequences:

Several factors contribute to this conundrum:

  • Tight Coupling: When services are intricately intertwined, modifying one can have cascading effects on others, necessitating individual approvals.
  • Fear of Breaking Changes: The apprehension of introducing disruptions to consumers hinders developers from making bold improvements.
  • Complex Change Management: Lack of well-defined processes and communication channels creates a bureaucratic nightmare for managing change requests.

The consequences of this anti-pattern are far-reaching:

  • Slowed Development: Waiting for approvals cripples agility and responsiveness to market demands.
  • Innovation Stifled: Fear of change hinders the adoption of new features and improvements.
  • Technical Debt: Workarounds and delays accumulate, impacting maintainability and efficiency.
  • Frustration and Silos: Developers and consumers become frustrated, creating communication silos and hindering collaboration.

Breaking Free from the Gridlock:

Conquering the Consumer Conundrum requires a multi-pronged approach:

1. Decouple Tightly Coupled Services: Analyze service dependencies and loosen coupling using techniques like API contracts, event-driven communication, and data pipelines.

2. Embrace Versioning and Deprecation: Implement well-defined versioning schemes (such as semantic versioning) and clear deprecation policies to manage changes with transparency and predictability; a minimal sketch follows this list.

3. Communication is Key: Establish clear communication channels and forums for consumers to voice concerns and collaborate on updates.

4. Leverage Documentation and Testing: Thorough documentation and comprehensive automated testing provide consumers with confidence and mitigate disruption risks.

5. Gradual Rollouts and Canary Releases: Implement strategies like rolling deployments and canary releases to minimize the impact of changes and gather feedback early.

6. Empower Developers: Foster a culture of trust and responsibility, empowering developers to make well-informed changes with appropriate communication and safeguards.

7. Invest in Monitoring and Feedback: Implement robust monitoring tools to track the impact of changes and gather feedback from consumers to address concerns promptly.
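
As a concrete illustration of point 2 above, here is a minimal Python sketch for a hypothetical orders service showing how two API versions can live side by side while the old one advertises its retirement. The handler names, the v1/v2 payload shapes, and the sunset date are all illustrative; the Deprecation, Sunset, and Link headers follow common HTTP conventions for signalling deprecation to consumers.

```python
# Minimal sketch: versioned handlers for a hypothetical "orders" service.
# v1 stays available but is marked as deprecated via response headers, so
# consumers can migrate to v2 on their own schedule instead of having to
# approve every change up front.
from datetime import date

def get_order_v1(order_id: str) -> dict:
    # Old response shape: flat price field.
    return {"id": order_id, "price": 42.0}

def get_order_v2(order_id: str) -> dict:
    # New response shape: structured money object (the breaking change).
    return {"id": order_id, "price": {"amount": 42.0, "currency": "USD"}}

HANDLERS = {"v1": get_order_v1, "v2": get_order_v2}
SUNSET = {"v1": date(2026, 6, 30)}  # announced retirement date for v1

def handle(version: str, order_id: str):
    """Dispatch to the requested API version and attach deprecation headers."""
    body = HANDLERS[version](order_id)
    headers = {}
    if version in SUNSET:
        headers["Deprecation"] = "true"
        headers["Sunset"] = SUNSET[version].isoformat()
        headers["Link"] = '</v2/orders>; rel="successor-version"'
    return body, headers

if __name__ == "__main__":
    print(handle("v1", "o-123"))  # old consumers still work, but see the warning
    print(handle("v2", "o-123"))  # migrated consumers get the new shape
```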

Tools and Technologies:

Several tools and technologies can assist in mitigating the Consumer Conundrum:

  • API Management Platforms: Manage and document service APIs, facilitating communication and change management.
  • Configuration Management Tools: Ensure consistent and secure configuration across all services.
  • Continuous Integration and Delivery (CI/CD) Pipelines: Automate deployments and testing, facilitating faster and safer releases.
  • Monitoring and Alerting Tools: Proactively identify issues and track the impact of changes.

Beyond the Technical:

Ultimately, overcoming the Consumer Conundrum requires a cultural shift:

  • Shifting Focus from “No Breaking Changes” to “Managing Change Effectively”: Instead of clinging to the impossible ideal of never causing disruptions, focus on mitigating and managing the impacts of necessary changes.
  • Building Shared Ownership and Trust: Foster collaboration and shared understanding between developers and consumers, recognizing that change is vital for long-term success.
  • Investing in Communication and Transparency: Open communication and clear documentation are essential for building trust and managing expectations.

Conclusion:

The Consumer Conundrum is a significant challenge in the microservices landscape. By understanding its causes and consequences, employing the right strategies and tools, and fostering a culture of collaboration and communication, you can transform it from a gridlock into a catalyst for innovation and sustained success in your microservices journey.

Microservices: Avoiding the Pitfalls, Embracing the Potential – A Guide to Anti-Patterns

Microservices have transformed the software development landscape, offering greater agility, scalability, and resilience. However, navigating this architectural transition is not without obstacles. Falling victim to common anti-patterns can turn your microservices utopia into a tangled web of complexity and frustration.

Fear not, intrepid developer! This article teaches you how to avoid these mistakes and realize the full potential of microservices. So, put on your anti-pattern-fighting cape and join us on this exploration:

The Anti-Pattern Menagerie:

1. The Break the Piggy Bank Blunder:

Imagine smashing a piggy bank overflowing with coins, representing the tightly coupled functionalities of a monolithic application. In the microservices revolution, this piggy bank is shattered, scattering the coins (code) into individual services. But what if, instead of carefully sorting and organizing, we simply leave them in a chaotic pile? This, my friends, is the essence of the “Break the Piggy Bank Blunder,” an anti-pattern that can shatter your microservices dreams.

Consequences: Tight coupling creates a tangled mess where changes in one service ripple through the entire system, causing instability and hindering deployments. Duplicated code wastes resources and creates inconsistencies, while inefficient deployments slow down development and increase risk.

Solution: Plan meticulously! Identify natural service boundaries based on functionality, ownership, and data access. Extract functionalities gradually, ensuring clear APIs and responsibilities. Think of it as organizing the scattered coins, grouping them by value and denomination for easy management.

2. The Cohesion Chaos Catastrophe:

Picture a circus performer juggling flaming chainsaws, plates spinning precariously on poles, and a live tiger – impressive, yes, but also chaotic and potentially disastrous. This, metaphorically, is the “Cohesion Chaos Catastrophe,” where a single microservice becomes overloaded with diverse functionalities.

Consequences: Maintainability suffers as the service becomes a complex, hard-to-understand monolith. Changes in one area impact seemingly unrelated functionalities, requiring extensive testing. Performance bottlenecks arise due to tight coupling and the sheer volume of tasks handled by the service.

Solution: Enforce strong cohesion! Each service should have a single, well-defined purpose and focus on a specific domain. Think of it as specializing each circus performer – one juggles, another balances plates, and a third tames the tiger. Each act remains impressive while manageable.

3. The Versioning Vacuum:

Imagine losing track of which piggy bank belongs to which child – a versioning nightmare! This lack of a versioning strategy in microservices is the “Versioning Vacuum,” leading to compatibility issues and deployment woes.

Consequences: Consumers relying on outdated versions face compatibility breakdowns. Rollbacks and updates become challenging without clear versioning history. Innovation stagnates as developers hesitate to make changes due to potential disruptions.

Solution: Implement a well-defined versioning scheme (e.g., semantic versioning). Think of it as labeling each piggy bank clearly, communicating changes transparently, and simplifying adoption of updates.

4. The Gateway Gridlock:

Imagine navigating a city with tollbooths for every entrance – time-consuming and inefficient. Individual API gateways for each microservice create this very scenario, hindering communication and performance.

Consequences: Unnecessary complexity multiplies as each service manages its own gateway, leading to duplicated logic and overhead. Communication slows down as requests traverse multiple gateways, impacting responsiveness. Development efficiency suffers due to managing and maintaining gateways instead of core functionalities.

Solution: Consider a centralized API gateway, acting as a single entry point for all services. Think of it as a unified tollbooth system for the city, streamlining routing, security, and other concerns, and enhancing efficiency.
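
To make the "single tollbooth" idea concrete, here is a minimal Python sketch of a centralized gateway that maps path prefixes to internal services and applies a shared check (here, a simple token-presence check) in one place. The service hostnames and routes are hypothetical, and a real gateway would actually proxy the request rather than just resolving the target.

```python
# Minimal sketch of a centralized API gateway: one entry point that maps
# path prefixes to internal service addresses (hypothetical hostnames),
# so cross-cutting concerns like auth and routing live in one place.
ROUTES = {
    "/orders": "http://orders-service:8080",
    "/payments": "http://payments-service:8080",
    "/customers": "http://customers-service:8080",
}

def resolve(path: str) -> str:
    """Return the backend URL a request should be forwarded to."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"No route for {path}")

def forward(path: str, token: str) -> str:
    # A real gateway would verify credentials once here (the single tollbooth)
    # and then proxy the request; this sketch only resolves the target.
    if not token:
        raise PermissionError("missing credentials")
    return resolve(path)

if __name__ == "__main__":
    print(forward("/orders/o-123", token="demo"))
```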

5. The Everything Micro Mishap:

Imagine dismantling your entire house brick by brick to rebuild it one miniature brick at a time – an overwhelming and unnecessary task. This “Everything Micro Mishap” occurs when everything is broken down into tiny services, leading to overhead and complexity.

Consequences: Excessive overhead burdens the system with communication complexity and distributed tracing challenges. Maintaining numerous small services becomes resource-intensive. Development slows down due to managing a large number of service boundaries.

Solution: Apply the “Strangler Fig” pattern. Gradually extract essential functionalities into microservices while leaving smaller, infrequently used components within the monolith. Think of it as strategically removing sections of your house and replacing them with miniature versions while maintaining the core structure for efficiency.
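
A minimal sketch of the Strangler Fig idea, assuming a hypothetical billing capability has already been extracted: a thin facade sends migrated endpoints to the new service and everything else to the monolith, so the cut-over can happen one capability at a time. The hostnames and paths are illustrative.

```python
# Minimal sketch of a Strangler Fig facade: endpoints that have already been
# extracted go to the new microservice; everything else still hits the
# monolith. Hostnames are hypothetical.
MIGRATED_PREFIXES = {
    "/billing": "http://billing-service:8080",   # already extracted
    "/invoices": "http://billing-service:8080",
}
MONOLITH = "http://legacy-monolith:8080"

def target_for(path: str) -> str:
    """Pick the new service if the endpoint has been migrated, else the monolith."""
    for prefix, service in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return service + path
    return MONOLITH + path

if __name__ == "__main__":
    print(target_for("/billing/42"))   # routed to the new billing microservice
    print(target_for("/reports/q3"))   # still served by the monolith
```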

6. The Reach-In Reporting Rampage:

Imagine detectives raiding each other’s offices for evidence instead of using a centralized archive. This “Reach-In Reporting Rampage” occurs when services directly access other services’ databases for reporting, creating tight coupling and hindering independent evolution.

Consequences: Tight coupling between services makes scaling and independent development difficult. Data inconsistencies arise due to direct access, impacting reporting accuracy. Performance bottlenecks occur as services contend for database resources.

Solution: Implement event-driven data pipelines or dedicated data aggregation services. Think of it as creating a central evidence archive accessible to all detectives, promoting loose coupling, independent development, and efficient data access.
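
Here is a minimal, in-process Python sketch of such an event-driven pipeline. The broker is just a list of subscribers standing in for Kafka, RabbitMQ, or similar, and the order/reporting services are illustrative: the reporting service builds its own revenue figure from published events instead of reaching into the orders database.

```python
# Minimal in-process sketch of an event-driven reporting pipeline: the orders
# service publishes events instead of letting the reporting service query its
# database directly. The broker here is a plain subscriber registry; a real
# system would use Kafka, RabbitMQ, or similar.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)
    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)
    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

class OrdersService:
    def __init__(self, broker):
        self._broker = broker
        self._db = {}  # the service's own private data store
    def place_order(self, order_id, amount):
        self._db[order_id] = amount
        self._broker.publish("order.placed", {"id": order_id, "amount": amount})

class ReportingService:
    """Builds its own read-optimized figures from events: no reach-in queries."""
    def __init__(self, broker):
        self.revenue = 0.0
        broker.subscribe("order.placed", self._on_order_placed)
    def _on_order_placed(self, event):
        self.revenue += event["amount"]

if __name__ == "__main__":
    broker = Broker()
    reporting = ReportingService(broker)
    orders = OrdersService(broker)
    orders.place_order("o-1", 19.50)
    orders.place_order("o-2", 5.25)
    print(f"Total revenue: {reporting.revenue}")  # 24.75
```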

7. The Manual Configuration Mayhem:

Imagine managing hundreds of individual remotes for all your devices – tedious and error-prone. This “Manual Configuration Mayhem” involves manually managing configurations for each microservice, leading to inefficiencies and vulnerabilities.

Consequences: Inconsistent configurations across services create security risks and operational challenges. Manual errors during configuration updates can lead to outages and disruptions. Developers waste time managing individual configurations instead of focusing on core functionalities.

Solution: Leverage a centralized configuration management platform. Think of it as a universal remote controlling all your devices, ensuring consistent, secure, and efficient configuration across all services.
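
As a rough sketch of the idea, the snippet below uses a plain dictionary to stand in for a central configuration store (Consul, etcd, a config service, and so on): services look up their settings in one place, with global values and documented defaults as fallbacks, instead of each maintaining hand-edited files. The service names and keys are hypothetical.

```python
# Minimal sketch of centralized configuration: services pull settings from one
# shared store (a dict here, standing in for Consul, etcd, or a config service)
# and fall back to documented defaults, so values stay consistent everywhere.
CENTRAL_STORE = {
    "payments-service/db_pool_size": "20",
    "payments-service/request_timeout_ms": "2500",
    "global/log_level": "INFO",
}

DEFAULTS = {"db_pool_size": "10", "request_timeout_ms": "1000", "log_level": "INFO"}

def get_config(service: str, key: str) -> str:
    """Service-specific value wins, then a global value, then the default."""
    return (
        CENTRAL_STORE.get(f"{service}/{key}")
        or CENTRAL_STORE.get(f"global/{key}")
        or DEFAULTS[key]
    )

if __name__ == "__main__":
    print(get_config("payments-service", "db_pool_size"))     # 20 (overridden)
    print(get_config("orders-service", "request_timeout_ms"))  # 1000 (default)
    print(get_config("orders-service", "log_level"))           # INFO (global)
```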

8. The Automation Apathy:

Imagine building your house brick by brick with your bare hands – a slow and laborious process. This “Automation Apathy” involves neglecting automation in deployment, testing, and monitoring, hindering agility and development speed.

Consequences: Manual deployments are slow and error-prone, delaying releases and increasing risks. Lack of automated testing leads to incomplete coverage and potential bugs slipping through. Manual monitoring fails to catch issues promptly, impacting user experience and service uptime.

Solution: Invest in CI/CD pipelines, automated testing frameworks, and monitoring tools. Think of it as employing robots and advanced tools to build your house efficiently, ensuring fast, reliable deployments, comprehensive testing, and proactive issue detection.
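
A deliberately simple sketch of "pipeline as a script": run the tests, build, deploy, and smoke-test, stopping at the first failure. The pytest, docker, and curl invocations are placeholders for whatever tooling you actually use, and deploy.sh and the staging URL are hypothetical; a real CI/CD system (Jenkins, GitHub Actions, GitLab CI, and so on) would express the same steps declaratively.

```python
# Minimal sketch of an automated deploy pipeline expressed as a script: run the
# test suite, build an artifact, deploy it, then smoke-test the deployment.
# The commands are hypothetical placeholders for your real tooling.
import subprocess
import sys

STEPS = [
    ("unit tests",  ["pytest", "-q"]),
    ("build image", ["docker", "build", "-t", "orders-service:candidate", "."]),
    ("deploy",      ["./deploy.sh", "staging", "orders-service:candidate"]),
    ("smoke test",  ["curl", "--fail", "http://staging.internal/orders/health"]),
]

def run_pipeline():
    for name, cmd in STEPS:
        print(f"==> {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"pipeline failed at step: {name}")
            sys.exit(result.returncode)
    print("pipeline succeeded")

if __name__ == "__main__":
    run_pipeline()
```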

9. The Layering Labyrinth:

Imagine navigating a maze where walls represent technology layers (UI, business logic, data), hindering agility and maintainability. This “Layering Labyrinth” occurs when services are divided based on technology layers instead of business capabilities.

Consequences: Tight coupling between layers impedes independent development and innovation. Changes in one layer ripple through others, increasing complexity and testing effort. Debugging issues becomes challenging due to layered architecture.

Solution: Focus on business capabilities and domain concepts when creating services. Think of it as building clear pathways within the maze based on business functionalities, promoting loose coupling, flexibility, and easier navigation.

10. The Consumer Conundrum:

Imagine negotiating every traffic light change with all affected drivers – a recipe for gridlock. This “Consumer Conundrum” occurs when every change must wait for approval from all service consumers, stalling development and innovation.

Solution: Establish well-defined versioning, deprecation policies, and communication channels. Think of it as implementing clear traffic rules and coordinated communication, allowing changes to move forward smoothly while addressing consumer concerns effectively.

Conclusion: Microservices Mastery through Anti-Pattern Avoidance

Microservices are powerful tools, but harnessing them requires prudence. By recognizing and avoiding these anti-patterns, you can create scalable, manageable, and robust microservices that will take your application to new heights. Remember that microservices are a journey, not a destination. Embrace the research, refinement, and learning, and you’ll be on your way to creating services that genuinely sparkle. Go out, embrace the microservices adventure, and create something spectacular!

Microservices Data Management Challenges and Patterns

Microservices architecture has gained popularity in recent years as a way to design complex and scalable applications. In this architecture, applications are divided into small, autonomous services that work together to provide the necessary functionality. Each microservice performs a specific task and communicates with other microservices through APIs. Data management is an essential part of microservices, and there are various patterns that can be used to handle data effectively.

Data Management in Microservices

Microservices architecture is characterized by a collection of small, independent services that are loosely coupled and communicate with each other using APIs. Each service is responsible for performing a specific business function and can be developed, deployed, and scaled independently. In a microservices architecture, data is distributed across multiple services, and each service has its own database or data store. This distribution of data can present challenges in managing data consistency, data redundancy, data access, and data storage.

Data Management Challenges in Microservices

One of the primary challenges in managing data within a microservices architecture is maintaining data consistency across services. Since each service has its own database, it is crucial to ensure that the data in each database is synchronized and consistent. In addition, managing data across services can be complex since services may use different data models or even different database technologies. Furthermore, as microservices are independently deployable, they may be deployed in different locations, which can further complicate data management.

To address these challenges, various data management patterns have emerged in microservices architecture. These patterns are designed to ensure data consistency, facilitate data exchange between services, and simplify the management of data across services.

Data Consistency

In a microservices architecture, data consistency is a significant challenge because data is distributed across multiple services, and each service has its own database. Maintaining consistency across all these databases can be challenging, and it requires careful consideration of data consistency patterns. One common pattern used to ensure data consistency in microservices architecture is the Saga pattern.

The Saga pattern is a distributed transaction pattern that ensures data consistency in a microservices architecture. The pattern is based on the idea of a saga, which is a sequence of local transactions that are executed in each service. Each local transaction updates the local database and publishes a message to the message broker to indicate that the transaction has been completed. The message contains information that is used by other services to determine whether they can proceed with their transactions. If all the local transactions are successful, the saga is considered to be successful. Otherwise, the saga is aborted, and compensating transactions are executed to undo the changes made by the local transactions.
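
The sketch below condenses this idea into a single Python process, with illustrative inventory, payment, and shipping steps: each step stands in for a local transaction in one service, and if any step fails, the compensating actions for the already-completed steps run in reverse order. A real saga would, as described above, coordinate these steps across services through a message broker rather than direct function calls.

```python
# Minimal in-process sketch of a saga: each step is a local transaction paired
# with a compensating action. If a step fails, the steps already completed are
# undone in reverse order. The order/payment/shipping steps are illustrative.
def reserve_inventory(ctx):
    ctx["inventory_reserved"] = True

def release_inventory(ctx):
    ctx["inventory_reserved"] = False

def charge_payment(ctx):
    if ctx.get("card_declined"):
        raise RuntimeError("payment declined")
    ctx["payment_charged"] = True

def refund_payment(ctx):
    ctx["payment_charged"] = False

def create_shipment(ctx):
    ctx["shipment_created"] = True

def cancel_shipment(ctx):
    ctx["shipment_created"] = False

# Each entry pairs a local transaction with its compensating transaction.
ORDER_SAGA = [
    (reserve_inventory, release_inventory),
    (charge_payment, refund_payment),
    (create_shipment, cancel_shipment),
]

def run_saga(saga, ctx):
    completed = []
    try:
        for step, compensate in saga:
            step(ctx)                      # local transaction in one service
            completed.append(compensate)   # remember how to undo it
        return "completed"
    except Exception:
        for compensate in reversed(completed):
            compensate(ctx)                # compensating transactions
        return "aborted"

if __name__ == "__main__":
    print(run_saga(ORDER_SAGA, {}))                        # completed
    print(run_saga(ORDER_SAGA, {"card_declined": True}))   # aborted, inventory released
```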

Data Redundancy

Data redundancy is another challenge in data management in a microservices architecture. In a microservices architecture, each service has its own database, which can lead to data duplication across multiple services. Data duplication can lead to inconsistencies, increased storage costs, and decreased system performance. One pattern used to address data redundancy in microservices architecture is the event-driven architecture.

In an event-driven architecture, events are used to propagate changes in data across multiple services. When a service updates its database, it publishes an event to the message broker, indicating that the data has changed. Other services that are interested in the data subscribe to the event and update their databases accordingly. This approach ensures that data is consistent across all the services and reduces data redundancy.

Data Access

Data access is another challenge in data management in a microservices architecture. In a microservices architecture, each service has its own database, which can make it challenging to access data across services. One pattern used to address this challenge is the API gateway pattern.

In the API gateway pattern, a single entry point is used to access all the services in the system. The API gateway provides a unified interface for accessing data across multiple services. When a client makes a request to the API gateway, the gateway translates the request into calls to the appropriate services and aggregates the responses to provide a unified response to the client. This approach simplifies the client’s interaction with the system and makes it easier to manage data access across services.
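
A minimal sketch of that aggregation step, with in-process stubs standing in for HTTP calls to hypothetical customer, order, and recommendation services: the gateway fans out one client request and merges the results into a single response.

```python
# Minimal sketch of gateway-side aggregation: one client request fans out to
# several services and comes back as a single payload. The service calls are
# in-process stubs standing in for real HTTP requests.
def fetch_customer(customer_id):
    return {"id": customer_id, "name": "Ada"}

def fetch_orders(customer_id):
    return [{"id": "o-1", "total": 19.99}, {"id": "o-2", "total": 5.00}]

def fetch_recommendations(customer_id):
    return ["keyboard", "monitor"]

def customer_dashboard(customer_id):
    """The gateway composes one response from three downstream services."""
    return {
        "customer": fetch_customer(customer_id),
        "orders": fetch_orders(customer_id),
        "recommendations": fetch_recommendations(customer_id),
    }

if __name__ == "__main__":
    print(customer_dashboard("c-42"))
```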

Data Storage

Data storage is another challenge in data management in a microservices architecture. In a microservices architecture, each service has its own database, which can lead to increased storage costs and decreased system performance. One pattern used to address this challenge is the database per service pattern.

In the database per service pattern, each service has its own database, which is used to store data specific to that service. This approach isolates each service's data and reduces the risk of that data being read or modified directly by other services.

Data Management Patterns in Microservices

Having discussed the data management challenges in microservices, let’s now look at the data management patterns that can be used in a microservices architecture.

Database per Service Pattern:

The database per service pattern is a popular approach for managing data in microservices. In this pattern, each microservice has its own database, and the database schema is designed to serve the specific needs of that service. This allows each microservice to manage its data independently without interfering with other microservices, and it provides benefits such as better scalability and flexibility, since each service can use the database technology that best suits its needs. The pattern is well suited to applications that have a large number of microservices and require a high degree of autonomy.
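
As a small illustration of the pattern, the sketch below gives two hypothetical services their own private stores (in-memory SQLite databases here) and exposes the data only through each service's own methods, never by letting one service query the other's tables.

```python
# Minimal sketch of "database per service": each service owns a private store
# (in-memory SQLite here) and exposes data only through its own API, never by
# letting other services query its tables directly.
import sqlite3

class OrdersService:
    def __init__(self):
        self._db = sqlite3.connect(":memory:")  # private to this service
        self._db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")

    def place_order(self, order_id, total):
        self._db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))

    def get_order(self, order_id):
        # The only sanctioned way for the outside world to read order data.
        row = self._db.execute(
            "SELECT id, total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return {"id": row[0], "total": row[1]} if row else None

class CustomersService:
    def __init__(self):
        self._db = sqlite3.connect(":memory:")  # a separate, independent store
        self._db.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT)")

    def register(self, customer_id, name):
        self._db.execute("INSERT INTO customers VALUES (?, ?)", (customer_id, name))

    def get_customer(self, customer_id):
        row = self._db.execute(
            "SELECT id, name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return {"id": row[0], "name": row[1]} if row else None

if __name__ == "__main__":
    orders, customers = OrdersService(), CustomersService()
    orders.place_order("o-1", 19.99)
    customers.register("c-1", "Ada")
    print(orders.get_order("o-1"), customers.get_customer("c-1"))
```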

Scalability is improved because each service can be scaled independently, allowing for more granular control over resource allocation. Fault isolation is also improved because if a particular service fails, it will not affect the data stored by other services. Deployment is also simplified because each service can be deployed independently, without impacting other services.

However, there are also some drawbacks to the database per service pattern. For example, it can lead to data duplication and inconsistency if services are not designed properly. Additionally, managing multiple databases can be challenging, and it can be difficult to ensure that all databases are kept up-to-date and synchronized.

One of the biggest challenges is maintaining consistency across multiple databases. When data is distributed across multiple databases, it can be difficult to ensure that all databases are in sync. To address this challenge, some organizations use event-driven architectures or distributed transactions to ensure consistency across multiple databases.

Shared Database Pattern:

In this pattern, all microservices share a common database, and each microservice has access to the data it needs. Centralizing the data simplifies data management: all data lives in a single database, and consistency is easier to maintain because every service reads and writes the same store. However, it can also lead to tight coupling between microservices, because changes made to the database schema can affect multiple services, making it difficult to evolve the system over time.

Data consistency is improved because all services use the same database, which ensures that data is always up-to-date and consistent across all services. Data duplication is also reduced because all services use the same database, eliminating the need for multiple copies of the same data.

However, there are also some drawbacks to the shared database pattern. For example, scalability can be challenging because all services share the same database, which can create bottlenecks and limit scalability. Fault isolation can also be challenging because if the shared database fails, it will affect all services that rely on it. Additionally, deployment can be more complex because all services need to be updated and tested when changes are made to the shared database schema.

One of the biggest challenges is that it can create tight coupling between services. When multiple services share a database, changes to the database schema or data can impact other services. This can make it difficult to evolve services independently, which is one of the key benefits of microservices.

Saga Pattern:

In the Saga pattern, a sequence of local transactions is executed across multiple microservices to ensure consistency. If one transaction fails, compensating transactions undo the steps that have already completed, returning the system to a consistent state. This pattern is useful for applications that need distributed transactions, but it can be challenging to implement and manage. In a microservices architecture, a single business transaction may span multiple services, and the saga pattern provides a way to ensure that all services involved either complete successfully or are compensated when there is a failure.

In the saga pattern, each service in the transaction is responsible for executing a portion of the transaction and then notifying the coordinator of its completion. The coordinator then decides whether to proceed with the next step in the transaction or to rollback if there is a failure. This approach provides a way to ensure consistency across services and is especially useful when using the database per service pattern.

Event Sourcing Pattern:

In the Event Sourcing pattern, each microservice maintains a log of events that have occurred within its domain. These events are used to reconstruct the state of the system at any point in time. This pattern allows for a high degree of flexibility in data management and provides a reliable audit trail of all changes made to the system. The event sourcing pattern is a way of managing data by storing a sequence of events that describe changes to an application’s state. In a microservices architecture, each service is responsible for handling a specific set of operations. The event sourcing pattern provides a way to store the history of changes to the state of a service.

In the event sourcing pattern, each service maintains a log of events that describe changes to its state. These events can then be used to reconstruct the current state of the service. This approach provides several benefits, such as better auditability and scalability, as events can be processed asynchronously.
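
Here is a minimal sketch of the mechanism, using an illustrative account domain and a plain in-memory list as the append-only log: the current state is never stored directly but is rebuilt by replaying the recorded events.

```python
# Minimal sketch of event sourcing: the account service never stores current
# state directly; it appends events and rebuilds state by replaying them.
# Event names and the account domain are illustrative.
EVENT_LOG = []  # append-only log; a real system would use a durable store

def append_event(stream_id, event_type, data):
    EVENT_LOG.append({"stream": stream_id, "type": event_type, "data": data})

def replay_balance(stream_id):
    """Reconstruct the current balance from the full event history."""
    balance = 0.0
    for event in EVENT_LOG:
        if event["stream"] != stream_id:
            continue
        if event["type"] == "MoneyDeposited":
            balance += event["data"]["amount"]
        elif event["type"] == "MoneyWithdrawn":
            balance -= event["data"]["amount"]
    return balance

if __name__ == "__main__":
    append_event("acct-1", "MoneyDeposited", {"amount": 100.0})
    append_event("acct-1", "MoneyWithdrawn", {"amount": 30.0})
    append_event("acct-1", "MoneyDeposited", {"amount": 5.0})
    print(replay_balance("acct-1"))  # 75.0; the state at any point is derivable
```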

CQRS Pattern:

In the CQRS (Command Query Responsibility Segregation) pattern, a microservice is responsible for handling commands (write operations) while another microservice is responsible for handling queries (read operations). This pattern separates the concerns of reading and writing data, which can lead to better scalability and performance. The Command Query Responsibility Segregation (CQRS) pattern is a way of separating the write and read operations in a system. In a microservices architecture, each service is responsible for handling a specific set of operations. The CQRS pattern provides a way to separate the read operations from the write operations.

In the CQRS pattern, each service has two separate models: a write model and a read model. The write model is responsible for handling the write operations, while the read model is responsible for handling the read operations. This approach provides several benefits, such as better scalability and performance, as read and write operations can be optimized separately.
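
The sketch below shows the separation in miniature for a hypothetical product catalog: commands go through a write model that enforces business rules, and queries are answered from a separate read model that keeps its own denormalized projection.

```python
# Minimal sketch of CQRS: commands go through a write model that enforces
# rules and emits changes; queries hit a separate, read-optimized model that
# is updated from those changes. The product-catalog domain is illustrative.
class ReadModel:
    """Handles queries from a denormalized projection, optimized for reads."""
    def __init__(self):
        self._by_name = {}

    def apply_product_added(self, product_id, name, price):
        self._by_name[name.lower()] = {"id": product_id, "price": price}

    def find_by_name(self, name):
        return self._by_name.get(name.lower())

class WriteModel:
    """Handles commands; owns the authoritative data and business rules."""
    def __init__(self, read_model):
        self._products = {}
        self._read_model = read_model

    def handle_add_product(self, product_id, name, price):
        if price <= 0:
            raise ValueError("price must be positive")
        self._products[product_id] = {"name": name, "price": price}
        # Propagate the change so the read side can update its projection.
        self._read_model.apply_product_added(product_id, name, price)

if __name__ == "__main__":
    reads = ReadModel()
    writes = WriteModel(reads)
    writes.handle_add_product("p-1", "Espresso Machine", 199.0)
    print(reads.find_by_name("espresso machine"))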

API Composition Pattern:

In the API Composition pattern, the responses of multiple microservices are combined to provide a unified API to the client. This pattern allows for a high degree of flexibility in data management, as each microservice can manage its data independently; however, it can be challenging to manage the dependencies between microservices. A closely related approach is the API gateway pattern, which is often used in conjunction with the database per service pattern: a single API gateway provides a unified interface for all services, allowing clients to access data from multiple services through a single API.

This pattern provides several benefits, such as improved security, simplified client development, and improved performance. Security is improved because the API gateway can be used to enforce authentication and authorization rules, ensuring that only authorized clients can access data from the various services. Simplified client development is achieved because clients only need to interact with a single API, rather than multiple APIs for each service. Improved performance is achieved because the API gateway can be used to cache frequently accessed data, reducing the number of requests that need to be made to the various services.

However, there are also some drawbacks to the API gateway pattern. For example, it can introduce a single point of failure, as all requests must pass through the API gateway. Additionally, the API gateway can become a performance bottleneck if it is not designed and implemented properly.

Conclusion

In conclusion, data management is a crucial aspect of microservices architecture, and there are various patterns that can be used to handle data effectively. Each pattern has its strengths and weaknesses, and the choice of pattern depends on the specific requirements of the application. By using these patterns, developers can build scalable and reliable microservices that can handle large volumes of data.

Microservices Deployment Patterns

Introduction

Microservices are the trend of the hour. Businesses are moving towards cloud-native architecture and breaking their large applications into smaller, independent modules called microservices. This architecture gives a lot more flexibility, maintainability, and operability, not to mention better customer satisfaction.

With these added advantages, architects and operations engineers face many new challenges as well. Earlier, they were managing one application; now they have to manage many. Each service again needs its own supporting services, like databases, LDAP servers, messaging queues, and so on. So stakeholders need to think through different deployment strategies that allow the entire application to be deployed reliably while maintaining its integrity and providing optimal performance.

Deployment Patterns

Microservices architects suggest several patterns that can be used to deploy microservices. Each pattern provides solutions for different functional and non-functional requirements.

Microservices can be written in a variety of programming languages or frameworks, or in different versions of the same language or framework. Each microservice comprises several service instances, like the UI, DB, and backend. A microservice must be independently deployable and scalable, and its service instances must be isolated from each other. The service must be quick to build and deploy, must be allocated adequate computing resources, and must run in a reliable, monitored deployment environment.

Multiple service instances per host

To meet the requirements mentioned at the start of this section, one option is to deploy service instances of multiple services on a single host, which may be physical or virtual. In other words, many service instances from different services run on a shared host.

There are different ways to do this. We can start each instance as its own process, for example a JVM process, or run multiple instances inside the same JVM process, much as multiple web applications share an application server. We can also use scripts to automate the start-up and shutdown processes with some configuration, which holds deployment-related information such as version numbers.

With this kind of approach, the resources could be used very efficiently. 
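
A rough sketch of the scripted approach described above: one launcher starts several service instances as separate processes on the shared host, driven by a small configuration that records ports and version numbers. The module names and commands are hypothetical placeholders for real services.

```python
# Minimal sketch of "multiple service instances per host": one script launches
# several service instances as separate processes on a shared host, each with
# its own port and version taken from a small deployment config.
# The service modules are hypothetical placeholders.
import subprocess

DEPLOYMENT = [
    {"name": "orders", "module": "orders_service", "port": 8081, "version": "1.4.2"},
    {"name": "payments", "module": "payments_service", "port": 8082, "version": "2.0.1"},
    {"name": "payments", "module": "payments_service", "port": 8083, "version": "2.0.1"},
]

def start_all():
    processes = []
    for entry in DEPLOYMENT:
        cmd = [
            "python", "-m", entry["module"],
            f"--port={entry['port']}", f"--version={entry['version']}",
        ]
        proc = subprocess.Popen(cmd)
        processes.append((entry["name"], proc))
        print(f"started {entry['name']} v{entry['version']} "
              f"on port {entry['port']} (pid {proc.pid})")
    return processes

def stop_all(processes):
    for name, proc in processes:
        proc.terminate()
        proc.wait()
        print(f"stopped {name}")

if __name__ == "__main__":
    procs = start_all()
    # ... run until a shutdown signal arrives ...
    stop_all(procs)
```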

Service instance per host

In many cases, microservices need their own space and a clearly separated deployment environment; they can't share the deployment environment with other services or service instances. Sharing a host raises the chance of resource conflicts or scarcity, and services written in the same language or framework but in different versions may not be able to coexist on the same host.

In such cases, a service instance could be deployed on its own host. The host could either be a physical or virtual machine. 

In such cases, there wouldn’t be any conflict with other services. The service remains entirely isolated. All the resources of the VM are available for consumption by the service. It can be easily monitored.

The only issue with this deployment pattern is that it consumes more resources.

Service instance per VM

In many cases, microservices need their own, self-contained deployment environment. The microservice must be robust and must start and stop quickly. Again, it also needs quick upscaling and downscaling. It can’t share any resources with any other service. It can’t afford to have conflicts with other services. It needs more resources, and the resources must be properly allocated to the service.

In such cases, the service could be built as a VM image and deployed in a VM. 

Scaling could be done quickly, as new VMs could be started within seconds. All VMs have their own computing resources that are properly allocated according to the needs of the microservice. There is no chance of any conflict with any other service. Each VM is properly isolated and can get support for load balancing. 

Service instance per Container

In some cases, microservices are very small and consume very few resources. However, they still need to be isolated: there must not be any resource sharing, and they can't afford to be co-located where they might conflict with another service. Such a service needs to be deployed quickly when there is a new release, and there might be a need to deploy the same service in different release versions. The service must be capable of scaling rapidly and must be able to start and shut down in a few milliseconds.

In such a case, the service could be built as a container image and deployed as a container. 

In that case, the service will remain isolated. There would not be any chance of conflict. Computing resources could be allocated as per the calculated need of the service. The service could be scaled rapidly. Containers could also be started and shut down quickly.

Serverless deployment

In certain cases, the microservice might not need to know anything about the underlying deployment infrastructure. In these situations, deployment is contracted out to a third-party vendor, typically a cloud service provider. The business is indifferent to the underlying resources; all it wants is to run the microservice on a platform, and it pays the provider based on the resources consumed for each service call. The provider takes the code and executes it for each request. The execution may happen in any execution sandbox, such as a container or a VM; these details are simply hidden from the service itself.

The service provider takes care of provisioning, scaling, load-balancing, patching, and securing the underlying infrastructure. Popular examples of serverless offerings include AWS Lambda and Google Cloud Functions.

The infrastructure of a serverless deployment platform is very elastic. The platform scales the service to absorb the load automatically. The time spent managing the low-level infrastructure is eliminated. The expenses are also lowered as the microservices provider pays only for the resources consumed for each call.
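
For a feel of what the service code looks like under this model, here is a minimal Python function written in the handler style that AWS Lambda uses: the platform invokes the handler for each request and takes care of provisioning, scaling, and billing, so the code deals only with the event itself. The event shape shown mimics an HTTP-style invocation and is illustrative.

```python
# Minimal sketch of a function written for a serverless platform, in the
# handler style AWS Lambda uses for Python: the platform calls handler() per
# request; no server process is managed by the service owner.
import json

def handler(event, context):
    """Entry point invoked by the platform for each request."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local smoke test with a fake event; on the platform, the runtime supplies
    # the event and context objects.
    fake_event = {"queryStringParameters": {"name": "microservices"}}
    print(handler(fake_event, context=None))
```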

Service deployment platform

Microservices can also be deployed on application deployment platforms. By providing high-level abstractions, such platforms hide the details of deployment. The platform can handle requirements like availability, load balancing, monitoring, and observability for the service instances. Application deployment platforms are thoroughly automated, which makes deployment reliable, fast, and efficient.

Examples of such platforms are Docker Swarm, Kubernetes, and Cloud Foundry, which is a PaaS offering. 

Conclusion

Microservices deployment options and offerings are constantly evolving, and more deployment patterns may well emerge. The patterns described above are very popular and are used by most microservice providers; they have proven successful and reliable. But as paradigms change, administrators keep looking for innovative solutions.