
Latitude 40 blog

From Keywords to Meaning: How AI Powers Semantic Searches

10/2/2025


Understanding Semantic Searches

Traditional keyword search is like looking for a needle in a haystack—if you don’t use the exact word, you might miss the match. Semantic search changes that by focusing on meaning rather than literal words.

Instead of searching through a database of book reviews for "Romance" or "Suspense", imagine typing:
  • "Get the heart racing" and finding reviews like:
    • "I couldn't put it down."
    • "It kept me up all night."
Or searching for:
  • "Excites the heart" and discovering:
    • "A passionate love story that lingers long after the last page."

Semantic search understands the search intent and emotion behind your query, not just the vocabulary. To make semantic searches possible, AI uses a concept called vector embeddings.

What Are Vector Embeddings?

AI models convert text into vector embeddings—arrays of numbers that represent meaning in a multi-dimensional space.
  • Each word or phrase becomes a token.
  • These tokens are plotted in a space with hundreds or thousands of dimensions.
  • Words with similar meanings are placed closer together.
Think of it like a galaxy of ideas, where “thrilling” and “exciting” orbit near each other, while “boring” floats far away.

Cosine Similarity: Measuring Meaning

Once text is embedded into vectors, we need a way to compare them. That’s where cosine similarity comes in.
  • It measures the angle between two vectors.
  • A score near 1 means the vectors (and thus the meanings) are very similar, while a score near 0 means they are unrelated.
Cosine similarity is the engine behind semantic matching.
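To make that concrete, here is a minimal Python sketch using tiny hand-made vectors. Real embeddings have hundreds or thousands of dimensions and come from a model, not by hand; the numbers below are purely illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real models use hundreds or thousands of dimensions.
thrilling = [0.9, 0.8, 0.1]
exciting = [0.85, 0.75, 0.2]
boring = [0.1, 0.2, 0.9]

print(cosine_similarity(thrilling, exciting))  # close to 1: similar meanings
print(cosine_similarity(thrilling, boring))    # much lower: dissimilar meanings
```

Notice that "thrilling" and "exciting" score near 1 while "thrilling" and "boring" score much lower, which is exactly the galaxy-of-ideas picture above expressed as arithmetic.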

Claris FileMaker’s New Semantic Search Features

Claris FileMaker developers now have AI script steps that make semantic search easy to implement.
  • You can embed text, store vectors, and compare them—all within FileMaker, using an AI language model from a provider such as Cohere.
  • This enables smarter search experiences, like:
    • Matching user queries to emotionally resonant reviews.
    • Finding relevant content even when keywords don’t match.
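Outside of FileMaker itself, the matching step can be illustrated with a small Python sketch. The review texts come from the examples earlier in this post, but the vectors are hypothetical stand-ins for real model output:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical pre-computed embeddings for stored reviews; in a real system these
# come from an embedding model (e.g. an AI provider's API), not from hand-typed numbers.
review_vectors = {
    "I couldn't put it down.": [0.88, 0.70, 0.15],
    "It kept me up all night.": [0.90, 0.65, 0.10],
    "A dry read that dragged on.": [0.10, 0.25, 0.85],
}

# Stands in for the embedding of the query "Get the heart racing".
query_vector = [0.92, 0.72, 0.12]

# Rank stored reviews by similarity to the query, best matches first.
ranked = sorted(review_vectors.items(),
                key=lambda item: cosine_similarity(query_vector, item[1]),
                reverse=True)

for text, _ in ranked:
    print(text)
```

The emotionally matching reviews rise to the top even though they share no keywords with the query; that ranking step is what the FileMaker script steps let you perform against stored vectors.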

About Latitude 40

Latitude 40 integrates experienced on-shore software development professionals into your organization, forming collaborative teams with or without your existing developers. Together, we identify needs, create tailored software solutions, and instill best practices that drive continuous improvement and ensure agility.

Contact Latitude 40 to learn how we can help implement AI into your Claris projects.

About the Author

Dan DeLeeuw is the Chief Operating Officer at Latitude 40 Consulting and a Certified FileMaker Developer. He consistently maintains the latest FileMaker certifications, reflecting his commitment to staying at the forefront of the platform. Dan is a strong advocate for clean, maintainable, and well-documented code, believing that clarity is key to scalable and sustainable development.

Clean Code: Mindset Over Dogma

9/18/2025

Developer is sitting at her desk with multiple monitors thinking about the design of a software system she's helping write.
Lately, I've seen a wave of criticism and developer backlash against clean code. Critics argue it's too rigid, too idealistic, and often counterproductive. They point to over-engineered systems filled with tiny methods, excessive abstraction, and unnecessary complexity, all in the name of "clean code."

But this criticism often confuses the mindset with its misapplication. Clean code isn't a checklist or a burden. It's a way of thinking about how your code will be understood, maintained, and extended by others.

The Mindset vs. the Mechanics

Clean code is not a set of rules to follow blindly. It's a mindset with a commitment to clarity, maintainability, and empathy for the next developer. The techniques commonly associated with clean code (like meaningful naming, small functions, and separation of concerns) are tools that support this mindset. They're not mandates.

When developers apply these techniques without understanding the why, they risk creating code that's technically "clean" but practically unreadable. The mindset encourages thoughtful application, not dogmatic adherence.

Why It Matters in the Enterprise

In enterprise environments, code lives longer, changes hands more often, and supports critical systems. Maintainability is a necessity. Clean code helps teams avoid technical debt, reduce onboarding time, and improve collaboration.

Even if you don't apply every principle, understanding the mindset helps you make better decisions. It's not about perfection. It's about writing code that others can work with confidently.

Clean Code Is Empathy in Action

At its core, clean code is about empathy. It's about writing software that respects the time and effort of others. It's about leaving behind code that's easy to read, change, and build upon.

This mindset fosters trust, reduces bugs, and makes teams more resilient. It's not just about how the code looks; it's about how it feels to work with.

Conclusion: Don't Reject the Mindset

Let's be clear: you absolutely shouldn't reject the clean code mindset. If you choose to skip the education and experience that come from practicing clean code, you're not just avoiding a style. You're limiting your growth as a developer.

Clean code principles teach you how to think about architecture, readability, and long-term maintainability. These are the skills that separate good developers from great ones. They're far more important than knowing the latest language features or syntax tricks.

Languages change. Frameworks evolve. But the ability to write code that others can understand, maintain, and extend is timeless.

About Latitude 40

Latitude 40 helps businesses achieve operational excellence and long-term business agility through tailored software solutions and expert guidance. By embedding into client teams, Latitude 40 delivers elegant, maintainable software while teaching Agile practices that foster sustainable growth. Latitude 40 builds with clarity, purpose, and a deep respect for the people who maintain and evolve code.

Need help cleaning up a messy codebase or modernizing your systems? Let’s talk.

About the Author

Andrew Anderson is President of Latitude 40 Consulting and a seasoned software architect with over two decades of experience in Agile delivery. He's worked globally as a developer, analyst, and instructor, and is passionate about writing maintainable code and helping teams grow through clean architecture and practical Agile practices. Andrew shares insights from the field to help developers and leaders build better software, and better teams.

Clean Code: Write Code That Explains Itself

8/10/2025

Software developer writing clean code via their laptop

Introduction

In the world of software development, clarity is king. While comments and documentation have their place, the best code is the kind that explains itself. That's the essence of self-documenting code: writing code so clear it barely needs comments.

In this post, we'll look at two simple but powerful techniques to make your code more readable and maintainable.

Naming That Explains

Poorly named variables and methods are one of the fastest ways to make code unreadable. Consider this simple example:

var d = DateTime.Now.AddDays(7);
if (u.IsActive && d > u.Reg)
{
    Send(u);
}

Now compare it to this:

var expirationDate = DateTime.Now.AddDays(7);
if (user.IsActive && expirationDate > user.RegistrationDate)
{
    SendWelcomeEmail(user);
}

Same logic. Vastly improved clarity.

Tips for Better Naming

  • Use descriptive nouns for variables: invoiceTotal, userEmail, retryCount
  • Use verbs for methods: CalculateTax(), SendReminder()
  • Use predicate-style names for boolean methods: IsExpired(), HasPermission(), CanRetry()
  • Abbreviations are fine when they’re widely understood or when the full name would be unwieldy
  • Don’t be afraid of longer names if they improve clarity, but use judgment. Sometimes overly long names can hurt readability more than they help

Extracting Logic into Named Methods

Another way to make code self-explanatory is to extract blocks of logic into well-named methods. This not only improves readability but also makes testing and reuse easier.

Before:

if (user.IsActive && DateTime.Now.AddDays(7) > user.RegistrationDate)
{
    SendWelcomeEmail(user);
}

After:

if (ShouldSendWelcomeEmail(user))
{
    SendWelcomeEmail(user);
}

...

private bool ShouldSendWelcomeEmail(User user)
{
    var expirationDate = DateTime.Now.AddDays(7);
    return user.IsActive && expirationDate > user.RegistrationDate;
}

Now the if statement reads like a sentence. The logic is tucked away in a method with a name that explains its purpose.

When Comments Are Appropriate

Self-documenting code reduces the need for comments, but it doesn't eliminate them entirely. Sometimes, the why behind a decision isn’t obvious from the code alone.

Here’s an example where a comment adds value:

// We use a 7-day buffer to account for timezone discrepancies in legacy systems
var expirationDate = DateTime.Now.AddDays(7);

This kind of comment explains why something is done, not what is being done. That distinction is key.

Use comments to:
  • Explain business rules or domain-specific logic
  • Justify workarounds or technical debt
  • Provide context for non-obvious decisions
Avoid comments that:
  • Repeat what the code already says
  • Try to explain confusing code that could just be rewritten

The Real Cost of Poor Naming

We recently worked with a client who brought us in to modernize a legacy system riddled with bugs. The biggest challenge? The code was a mess of cryptic variable names, massive methods, and zero structure. Some classes literally had 100,000+ lines of code and if/else nesting 12 levels deep. It was nearly impossible to tell what anything did.

Simple changes like fixing a validation rule or updating a report were taking 10x longer than they should have. Every fix risked breaking something else because the code was so hard to reason about.

The takeaway? Clean code isn't just a nicety. It’s a multiplier for your team's velocity and confidence.

Final Thoughts

Self-documenting code is one of the simplest things you can do to improve your codebase. It's about writing code that's easy to read, easy to change, and easy to trust.

Start with better names. Extract logic into small, focused methods. And when you need to explain a why, leave a thoughtful comment.

Next time you're tempted to write a comment, ask yourself: Can I just make the code clearer instead?

About Latitude 40

Latitude 40 integrates experienced on-shore software development professionals into your organization, forming collaborative teams with or without your existing developers. Together, we identify needs, create tailored software solutions, and instill best practices that drive continuous improvement and ensure agility.

Need help cleaning up a messy codebase or modernizing your system? Let’s talk.

About the Author

Andrew Anderson is the President of Latitude 40 and a lifelong advocate for clean, maintainable code. With over two decades of experience as a developer, analyst, and Agile coach, he's worked globally to help teams build better software and embrace sustainable delivery practices.

Clean Code: The Strategy Pattern in C#

6/30/2025


Introduction

In software development, writing code is only half the job. The real challenge (and the real cost) comes later, when that code needs to change.

That's why maintainability is one of the most important qualities of a healthy codebase. Clean, modular code reduces bugs, accelerates onboarding, and supports business agility.

One pattern I find myself using frequently is the Strategy Pattern. It's a simple but powerful way to eliminate brittle if/else logic and replace it with clean, extensible design. In this post, I'll walk through a simple yet realistic example of different order pricing models and show how the Strategy Pattern helps us write code that's easier to understand, test, and evolve.

Setting the Stage: Dynamic Pricing Strategy

Let's say you're building a custom order system for a horticulture company. Different clients get different pricing strategies:
  • Retail customers pay full price.
  • Retail loyalty customers get a percentage discount.
  • Wholesale customers get a volume discount based on order volume.

Before: The Classic If/Else Trap


public class PricingService(IWholesaleDiscountRepository wholesaleRepo,
    LoyaltyProgramSettings loyaltyProgramSettings)
{
    public decimal CalculatePrice(Customer customer, Order order)
    {
        if (customer.Type == CustomerType.Retail)
        {
            return order.BasePrice;
        }
        else if (customer.Type == CustomerType.Loyalty)
        {
            return order.BasePrice * (1 - loyaltyProgramSettings.DiscountRate);
        }
        else if (customer.Type == CustomerType.Wholesale)
        {
            var tiers = wholesaleRepo.GetDiscountTiers()
                                      .OrderByDescending(t => t.MinQuantity);

            var discount = tiers.FirstOrDefault(
                t => order.Quantity >= t.MinQuantity)?.DiscountRate ?? 0m;

            return order.BasePrice * (1 - discount);
        }

        throw new InvalidOperationException("Unknown customer type");
    }
}

What's Wrong?

  • The PricingService is doing too much, mixing branching logic with several different calculations. Real-life scenarios are likely to be more complex still (wholesale volume discounts, for instance, are probably based on seasonal order volume, not a single order's volume).
  • Depending on the complexity of your real-world scenario, you may want to apply the Open/Closed Principle, and that is impossible in this example.
  • The logic is not modular - you can't reuse or test pricing strategies independently.

After: Strategy Pattern

Let's refactor this using the Strategy Pattern.

Step 1: Define the Strategy Interface


public interface IPricingStrategy
{
    decimal CalculatePrice(Order order);
}

Step 2: Implement Strategies


public class RetailPricingStrategy : IPricingStrategy
{
    public decimal CalculatePrice(Order order) => order.BasePrice;
}

public class LoyaltyPricingStrategy(LoyaltyProgramSettings loyaltyProgramSettings)
    : IPricingStrategy
{
    public decimal CalculatePrice(Order order)
        => order.BasePrice * (1 - loyaltyProgramSettings.DiscountRate);
}

public class WholesalePricingStrategy(IWholesaleDiscountRepository discountRepo)
    : IPricingStrategy
{
    public decimal CalculatePrice(Order order)
    {
        var tier = discountRepo.GetDiscountTiers()
                        .OrderByDescending(t => t.MinQuantity)
                        .FirstOrDefault(t => order.Quantity >= t.MinQuantity);

        var discountRate = tier?.DiscountRate ?? 0m;
        return order.BasePrice * (1 - discountRate);
    }
}

Step 3: Strategy Factory


public interface IPricingStrategyFactory
{
    IPricingStrategy GetStrategy(Customer customer);
}

public class PricingStrategyFactory(IServiceProvider provider)
    : IPricingStrategyFactory
{
    public IPricingStrategy GetStrategy(Customer customer) => customer.Type switch
    {
        CustomerType.Retail
            => provider.GetRequiredService<RetailPricingStrategy>(),
        CustomerType.Wholesale
            => provider.GetRequiredService<WholesalePricingStrategy>(),
        CustomerType.Loyalty
            => provider.GetRequiredService<LoyaltyPricingStrategy>(),
        _ => throw new NotSupportedException("Unknown customer type")
    };
}

Step 4: Application Code


public class OrderProcessor(IPricingStrategyFactory pricingStrategyFactory)
{
    public void ProcessOrder(Customer customer, Order order)
    {
        var strategy = pricingStrategyFactory.GetStrategy(customer);
        var price = strategy.CalculatePrice(order);
        ...
    }
}

Conclusion

The Strategy Pattern isn't just a design tool - it's a way to build systems that are easier to understand, extend, and maintain.

In the example above, we started with a data-driven pricing model that worked, but was tightly coupled and procedural. By refactoring with the Strategy Pattern, we separated concerns and made each pricing rule modular.

This kind of architecture pays off in real-world scenarios:

  • New pricing models can be added without touching existing logic.
  • Each pricing model can be independently unit tested and verified more easily.
  • Business rules could potentially be loaded from configuration or external sources.
  • Developers onboard faster and changes are quicker to make, because the code is easier to reason about.

These aren't just technical wins, they're business wins. Clean code reduces risk, accelerates delivery, and supports long-term agility.

About Latitude 40

Latitude 40 integrates experienced on-shore software development professionals into your organization, forming collaborative teams with or without your existing developers. Together, we identify needs, create tailored software solutions, and instill best practices that drive continuous improvement and ensure agility.

Need help modernizing your codebase or designing maintainable systems? Let's talk.

About the Author

Andrew Anderson is the President of Latitude 40 and a seasoned software architect with over two decades of experience in development and Agile delivery. He's worked globally as a developer, analyst, and coach, and is passionate about helping teams build software that is both powerful and maintainable.

I have a couple thoughts... CQRS and custom development

5/23/2015

When marketing asks me to write “thought leadership” blogs, I generally cringe. I’m a developer at heart and not necessarily a thought leader. I think about ways to use my skills as a technologist and a software developer to create thoughtful and impactful software applications that help businesses grow, evolve, better use processes and so on. Thought leader? Meh.

But then the thought struck me. What if, with all of the applications Latitude 40 has built over the years, I did have some ideas I could impart? I mean, the reason I got into custom software development – and built a business around it – in the first place was that I saw and experienced the limitations of forcing businesses to all run on the same packaged platforms. Businesses should have the option of evolving unique internal processes as a way to compete. Now granted, there are some circumstances where packaged applications make complete sense; word processing is one example. But even in that instance, it could be argued that organizations could deploy many open source and custom developed word processing applications that better fit a particular industry or use case.

But I digress…

Here in the Denver area, we have a burgeoning custom software development community and I regularly attend and participate in developer forums and conferences so that I can keep up to date on some of the latest and greatest that’s out there and what my developer colleagues are working on within some of the most well-known brands. One architecture that keeps coming up is Command Query Responsibility Segregation (CQRS). Now for those of you in the business community, you may have no idea what that means – but you can absolutely benefit from it. My developer colleagues will say, “but that’s been around for a while now… what’s so new about that?”

To start, let me give a quick rundown of what CQRS is and then put it into context. The main purpose of CQRS is to assist in building high performance, scalable systems, potentially with large amounts of data. The pattern states there should be complete separation between "commands" that perform actions and "queries" that read data. Traditionally, all of these functions are built into a single set of components utilizing a single data store. CQRS has you build those commands and queries out into completely separate architectures to keep one side from bottlenecking the other.

So what does this mean to a business person? Well, simply put, this concept is solving traditional software problems in a new way that can help provide higher transactional volume and greater performance with lower infrastructure costs. Let’s take an online merchant as an example. As we know, a customer does many different things when deciding what (and when) to make a purchase. They may browse inventory, put items into their online shopping cart, take other items out, set up multiple shipping addresses and payment methods, etc. These are mainly actions, but the user requires sophisticated pages presenting all the information they need (by querying) to decide how to act. By keeping the queries needed to generate these user-friendly pages segregated, it’s easy to optimize them for ultimate performance without having to fit action requirements into the same models.  As you can probably imagine, all of these “queries” and “commands” put a lot of strain on the system. By segregating these types of actions, not only can the online merchant improve the overall customer experience, they can also optimize the infrastructure for each by adding/removing cloud resources, as needed, for the queries and commands separately as transactional volume increases/decreases.
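The command/query split described above can be sketched in a few lines. This is an illustrative Python toy, not a prescribed framework; the names, the in-memory stores, and the synchronous projection are all my own simplifications.

```python
# Minimal CQRS sketch: commands mutate the write store; queries read a
# separately maintained read model. All names here are illustrative.

write_store = {}  # authoritative state, keyed by cart id
read_model = {}   # denormalized view optimized for display

# --- Command side: performs actions, returns nothing to display ---
def add_item_to_cart(cart_id, item, price):
    cart = write_store.setdefault(cart_id, [])
    cart.append((item, price))
    _project(cart_id)  # in a larger system this projection is often asynchronous

# --- Projection: keeps the read model in sync with the write store ---
def _project(cart_id):
    cart = write_store[cart_id]
    read_model[cart_id] = {
        "items": [name for name, _ in cart],
        "total": sum(price for _, price in cart),
    }

# --- Query side: reads the pre-built view, performs no actions ---
def get_cart_summary(cart_id):
    return read_model.get(cart_id, {"items": [], "total": 0})

add_item_to_cart("cart-1", "garden trowel", 12.50)
add_item_to_cart("cart-1", "seed packet", 3.25)
print(get_cart_summary("cart-1"))
```

Because the query side never touches the write store, each side can be scaled, cached, and optimized on its own, which is exactly the cloud resource flexibility described above.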

Another idea which I find fascinating is the strengthening of task-based user interfaces (UI). Let’s go back to the example of the order. A task-based UI is about streamlining the user experience and also capturing the “intent” of any user interaction. Those events – changes to shipping addresses, added/dropped shopping cart items and so on – can be recorded and saved in a way that can be analyzed to better understand the customer and their intentions.

These types of analysis certainly sound like business intelligence of old. Does this mean that business intelligence is back? It certainly is but from a more interesting perspective. As we know, BI is an amazing concept and data mining tools are now extremely prolific. In terms of actually implementing it, things have fallen really short or have only been available to the largest of companies because of its complexity and cost. Also, these mining tools are great, but are only as valuable as the data available to mine. By storing all of these events as a historical representation of state, you gain the ability to recreate the entire history of that aforementioned order at will, allowing you to analyze it at every single state it’s ever been in, not just its current state. This gives us the ability to ask questions in the future that we might not even be thinking about asking now. Utilizing these techniques, what was super expensive and difficult just a decade ago is now possible for most businesses.
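Storing events as a historical representation of state can be sketched like this. Again, a deliberately tiny Python illustration with hypothetical event names, not a production event store:

```python
# Sketch of event sourcing: store every change as an event, then rebuild the
# order's state at any point in its history by replaying events.

events = [
    {"type": "item_added", "item": "rose bush"},
    {"type": "item_added", "item": "fertilizer"},
    {"type": "address_changed", "address": "123 Elm St"},
    {"type": "item_removed", "item": "fertilizer"},
]

def replay(events, up_to=None):
    """Rebuild order state from the first `up_to` events (all events if None)."""
    state = {"items": [], "address": None}
    for event in events[:up_to]:
        if event["type"] == "item_added":
            state["items"].append(event["item"])
        elif event["type"] == "item_removed":
            state["items"].remove(event["item"])
        elif event["type"] == "address_changed":
            state["address"] = event["address"]
    return state

print(replay(events))           # current state
print(replay(events, up_to=2))  # the order as it looked after the first two events
```

Because the full event history is retained, you can answer questions about any past state of the order, including questions nobody thought to ask when the events were recorded.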

I really enjoy looking at these types of business challenges with technology. Custom application development is happening now at some of the largest online merchant organizations, ones that sponsor Thanksgiving Day Parades and others that had their start in selling shoes. The great thing for businesses that may not have a multi-million dollar development budget is that they can take advantage of these types of solutions now as well. What I always tell people looking to solve these types of business challenges is to partner with someone who will be your trusted advisor for the long term and someone who can see the bigger picture, rather than a one-off development project.

I’m interested to hear what you think about where custom software could take businesses and organizations.


Copyright © 2025 Latitude 40 Consulting, Inc.  All rights reserved.
Latitude 40® is a trademark of Latitude 40 Consulting, Inc. All other trademarks are the property of their respective owners.
11001 W. 120th Ave. Suite 400
Broomfield, CO 80021
303-544-2191