The UnboundID LDAP SDK for Java 4.0.6 is now available. It fixes a few bugs and makes improvements in connection pooling, referral following, and managing certificates.
Ludovic Poitou reports that ForgeRock Directory Services 6.0 is now available, with support for entry expiration, sorting based on field values within JSON objects, offline configuration changes, reduced disk space usage, and performance improvements. Release notes are available at https://backstage.forgerock.com/docs/ds/6/release-notes/.
LDAP directory servers often contain sensitive data, including personally identifiable information about individuals, user passwords, account details, and more. It’s critical for administrators to configure the server so that all of this information is accessible only to clients that are legitimately authorized to interact with it. However, it’s just as essential for the applications that interact with the server to protect that data themselves.
This is the first in a series of blog posts that will provide tips for securing your LDAP-enabled applications. In this post, I’ll discuss securing the communication between the client and the server.
Always use TLS-encrypted communication
It’s 2018. Unencrypted communication shouldn’t be a thing anymore. Even if all your communication happens on a locked-down private network, there’s no good reason for traffic between clients and servers (and also between servers and servers, for that matter) to pass in the clear.
There are a few options for encrypting communication between clients and servers:
- Establish a connection to a server port that requires TLS (transport layer security, the successor to SSL) for all communication.
- Establish an unencrypted connection to the server and then use the LDAP StartTLS extended operation to convert the connection from insecure to secure.
- Establish an unencrypted connection to the server and then use a SASL bind request that protects the provided credentials and supports a “confidentiality” quality-of-protection to encrypt all communication after that point.
- Use some other external mechanism (like IPsec or stunnel) to encrypt the communication in a manner that doesn’t require any specific knowledge of the encryption in the client.
Of these options, the first one is the best and is really the only one that you should consider. It’s the gold standard for secure communication and the best supported by both clients and servers. It doesn’t incur that much performance overhead, and it doesn’t have to cost anything. You can get free certificates signed by a legitimate authority through the Let’s Encrypt service, or if you only have internal clients to worry about, you can maintain your own certification authority so that you can use longer-term certificates signed by a common issuer. Self-signed server certificates are also an option, and the communication is just as secure when using them as when using a certificate signed by a commercial authority, but self-signed certificates don’t give the client the same set of options when it comes to deciding whether to trust the server.
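If you’re using the UnboundID LDAP SDK for Java, establishing a connection to a TLS-only port might look something like the following sketch (the host name, port, and trust store path are placeholders for your own environment):

```java
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.util.ssl.SSLUtil;
import com.unboundid.util.ssl.TrustStoreTrustManager;

public class SecureConnect
{
  public static void main(final String[] args) throws Exception
  {
    // Trust only certificates that chain to an issuer in our own trust
    // store (the path here is a placeholder).
    final SSLUtil sslUtil = new SSLUtil(
         new TrustStoreTrustManager("/path/to/ldap-truststore.jks"));

    // Connect directly to the LDAPS port so that everything, including any
    // bind credentials sent later, is encrypted from the very first byte.
    final LDAPConnection connection = new LDAPConnection(
         sslUtil.createSSLSocketFactory(), "ldap.example.com", 636);
    try
    {
      // ... issue requests over the secure connection ...
    }
    finally
    {
      connection.close();
    }
  }
}
```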
Both StartTLS and SASL confidentiality require the communication to be initially unencrypted, and therefore at least that part of the communication is subject to observation and undetectable manipulation. StartTLS and SASL confidentiality also allow for the possibility that the security layer could be closed while leaving the underlying connection established, which would cause the communication to revert to an insecure state. On top of that, SASL confidentiality is not as widely supported as TLS or the StartTLS extended operation, and it doesn’t necessarily have the same level of support for authenticating the server to the client.
Using an external security mechanism like IPsec is the least desirable option. The client and server will likely not know anything about the encryption at all, which also means that they won’t know if that encryption isn’t in place. A misconfiguration could leave the communication exposed, and neither party would know about it. It also means that the server might not permit certain operations that are only allowed for clients using a secure connection because it can’t tell that there is a security layer in place.
Although you might be tempted to only secure things like bind operations and password changes, you should resist that temptation. It’s true that you want to encrypt credentials, but that’s not the only sensitive information that’s stored in the server. Clients might need to retrieve or update that data, and it’s much easier to just encrypt everything than to try to decide which data needs to be protected and which is okay to go in the clear. Securing everything protects against mistakes in which the client inadvertently transfers data in the clear. Further, TLS provides more benefit than just encryption; it also has a trust mechanism built into it, so clients can be more confident that they are actually communicating with the legitimate server and not some impostor.
Only support strong TLS protocols and cipher suites
When establishing a TLS-encrypted connection, the client kicks off the negotiation process by sending a TLS client hello message. Among other things, this client hello message specifies the maximum TLS protocol version that the client supports and the cipher suites that the client is willing to use. The server ultimately decides which TLS protocol version and cipher suite will be used for the communication (and it tells the client what it has chosen in the server hello message that it sends back in response to the client hello), but the client can decide whether it wants to accept what the server has chosen. Clients should only accept sufficiently strong cryptography to ensure that the communication really is secure.
Note that if you’re using Java, you might need to install a custom policy file to enable support for the strongest encryption options. For stupid political reasons, companies based in the United States aren’t allowed to export the strongest forms of encryption to certain countries. As a result, many Java versions ship with a default configuration that doesn’t allow anyone to use the strongest encryption algorithms, and you have to install a special policy file that unlocks them. Search for something like “Java unlimited strength jurisdiction policy files” for more information. Fortunately, the most recent Java releases (Java 9 and later, and the most recent builds of Java 8) ship with support for strong encryption enabled by default, so if you’re using one of them, you might not need to do anything.
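You can check programmatically whether the unlimited strength policy is in effect. This sketch (the class and method names are mine) uses the standard javax.crypto.Cipher API:

```java
import javax.crypto.Cipher;

public class CryptoPolicyCheck
{
  // Returns the maximum AES key size the JVM's crypto policy permits, or
  // -1 if AES is somehow unavailable. Under the restricted policy this is
  // 128; with unlimited strength enabled it is very large
  // (Integer.MAX_VALUE on recent JDKs).
  public static int maxAESKeyLength()
  {
    try
    {
      return Cipher.getMaxAllowedKeyLength("AES");
    }
    catch (final Exception e)
    {
      return -1;
    }
  }

  public static void main(final String[] args)
  {
    if (maxAESKeyLength() < 256)
    {
      System.err.println("Unlimited strength policy not installed; " +
           "256-bit AES is unavailable.");
    }
  }
}
```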
For the protocol version, clients should never accept anything below TLSv1. If you can get away with it, then only supporting TLSv1.1 and TLSv1.2, or maybe even only TLSv1.2, is even better. The upcoming TLSv1.3 looks like it has even further improvements, so you should probably look into using it once it becomes available. SSLv3 is considered broken and should not be used, and SSLv2 is even worse. The client hello message should include the maximum protocol version that the client is willing to use, and the server is supposed to pick the highest version it supports that is less than or equal to the version that the client provided. Once that’s done, the client should check the negotiated protocol version and ensure that it is acceptable.
For the cipher suites, the server is supposed to pick the strongest suite that the client mentions in the client hello message, but the client should check to make sure that the server actually picked one of those and not something weaker. Nevertheless, the client should only suggest cipher suites that support strong encryption.
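The post-handshake check described above can be sketched as follows. The helper and the set of acceptable versions are my own illustration; adjust the set to match your own policy, and add TLSv1.3 once your Java version supports it:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ProtocolCheck
{
  // Only protocol versions we consider strong enough.
  private static final Set<String> ACCEPTABLE =
       new HashSet<>(Arrays.asList("TLSv1.2", "TLSv1.1"));

  public static boolean isAcceptable(final String negotiatedProtocol)
  {
    return ACCEPTABLE.contains(negotiatedProtocol);
  }

  // After the handshake completes, feed SSLSocket.getSession().getProtocol()
  // into isAcceptable and close the connection if it returns false.
}
```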
When choosing which suites to exclude, you should consider the following:
- Don’t include any suites that use a null symmetric cipher because they don’t actually encrypt anything. These generally include “_WITH_NULL” or “_WITH_ENULL” in the name of the suite.
- Don’t include any suites that use export-level encryption because export-level encryption is very weak. These generally include “_EXPORT” in the name of the suite.
- Don’t include any suites that use the IDEA, RC4, or single-DES symmetric ciphers. These ciphers are all known to be weak. These generally include “_IDEA”, “_RC4”, “_ARC4”, “_ARCFOUR”, or “_DES” in the name of the suite.
- Don’t include any suites that use the weak MD5 digest algorithm. These generally include “_MD5” in the name of the suite.
- Don’t include any suites that support anonymous key exchange because they don’t use a certificate and therefore don’t offer a trust mechanism. These generally include “_ANON”, “_ANULL”, “_ADH”, or “_AECDH” in the name of the suite.
- Don’t include any suites that use a non-ephemeral Diffie-Hellman cipher because they don’t support forward secrecy. These generally include “_DH” or “_ECDH” in the name of the suite.
- Don’t include any suites that use a non-RSA key with an ephemeral Diffie-Hellman cipher. OWASP recommends only using RSA keys with ephemeral Diffie-Hellman ciphers because DSA and DSS keys can be weak. These generally include “_DHE” or “_ECDHE” in the name of the suite and don’t include “_RSA”.
When choosing which suites to include, you should consider the following:
- Prefer suites that support forward secrecy over those that don’t. Forward secrecy ensures that the encryption remains secure even if the certificate’s private key is compromised.
- Prefer suites that use an AES cipher over those that don’t (for example, triple-DES), and prefer suites that use 256-bit AES over suites that use 128-bit AES.
- Prefer suites that use DHE over suites that use ECDHE, and prefer suites that use ECDHE over suites that use RSA.
- Prefer suites that use the Galois/Counter Mode (GCM) over suites that use other modes (like CBC). GCM suites use authenticated encryption and provide stronger assurance that the encrypted data has not been altered.
- Prioritize suites in order of the strength of the digest algorithm. 512-bit SHA-2 (although you probably won’t see any suites with this digest algorithm) should be preferred over 384-bit SHA-2, which should be preferred over 256-bit SHA-2, which should be preferred over SHA-1.
- Note that suites that contain “_SCSV” are signaling cipher suite values that indicate support for certain TLS features (for example, TLS_EMPTY_RENEGOTIATION_INFO_SCSV indicates that the TLS implementation supports secure renegotiation). It’s good to include them in the list because they can help ensure that the resulting TLS session uses the best set of options possible.
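As a sketch of the exclusion rules above, the following hypothetical helper (not a definitive list) filters a set of supported suite names by the fragments just described:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class CipherSuiteSelector
{
  // Name fragments that mark a suite as unacceptable, per the rules above.
  // Matching is case-insensitive because some JSSE names use lowercase
  // (e.g., TLS_DH_anon_WITH_AES_128_CBC_SHA).
  private static final String[] EXCLUDED_FRAGMENTS = {
       "NULL",                               // null cipher: no encryption
       "_EXPORT",                            // weak export-level encryption
       "_IDEA", "_RC4", "_ARC4", "_ARCFOUR", // weak symmetric ciphers
       "_DES_",                              // single DES (won't match 3DES)
       "_MD5",                               // weak digest algorithm
       "_ANON", "_ADH", "_AECDH"             // anonymous key exchange
  };

  public static List<String> selectStrongSuites(final String[] supported)
  {
    final List<String> selected = new ArrayList<>();
    for (final String suite : supported)
    {
      final String upper = suite.toUpperCase(Locale.ROOT);
      boolean acceptable = true;
      for (final String fragment : EXCLUDED_FRAGMENTS)
      {
        if (upper.contains(fragment))
        {
          acceptable = false;
          break;
        }
      }
      if (acceptable)
      {
        selected.add(suite);
      }
    }
    return selected;
  }
}
```

You might feed SSLSocket.getSupportedCipherSuites() into this and apply the result with setEnabledCipherSuites. Further exclusions (for example, non-ephemeral Diffie-Hellman suites) and the preference ordering described above can be layered on in the same way.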
For example, if you’re using Oracle’s Java 8 build 162, I’d recommend enabling the following cipher suites, in order of most preferred to least preferred:
You may also want to check other resources to help identify the best cipher suite options. The Qualys SSL Labs site provides a helpful tool (intended predominantly for web servers) for evaluating the strength of a server’s TLS implementation, and it also offers helpful documentation for tuning TLS settings. Similarly, OWASP (the Open Web Application Security Project) provides a very informative Transport Layer Protection Cheat Sheet with lots of useful information.
Properly validate the server’s certificate
TLS allows you to ensure that communication between the client and the server is encrypted, but it also offers a second substantial benefit in that it can help the client be confident that the connection is established to the correct, legitimate server. That’s because part of the TLS negotiation involves the server presenting a certificate chain to the client, and the client can then use that chain to verify the identity of the server. Without this step, a client could potentially be tricked into connecting to something that isn’t the legitimate server (e.g., by DNS hijacking or a man-in-the-middle attack), which opens the door for all kinds of badness. For example, a malicious application could act as a simple LDAP proxy server that sits between the client and the real directory server, stealing or altering the communication that passes through, and possibly even injecting its own requests that the server will process as if they had come from the real client.
For a TLS-enabled application, blindly trusting the certificate that a server presents is a big no-no. The types of validation that the client should perform include:
- Make sure that the certificate issuers are trusted. The client or TLS library should have some kind of certificate trust store that has all of the certificates that are considered trusted, or that are at least considered trusted to sign certificates that will themselves be considered trusted. Make sure that at least the certificate at the root of the chain is in that trust store.
- Make sure that the certificate is for the right server. A TLS server certificate should include information about the server(s) with which it is intended to be used. The client should make sure that the address it was given to connect to the server is listed in either the CN attribute of the server certificate’s subject or that it’s listed in a subject alternative name extension. This can help ensure that if a trusted certificate’s private key is compromised, it won’t be trusted for use on any systems other than the ones for which it was originally intended.
- Make sure that the current time is within the validity window for all certificates in the chain. If a certificate is expired, or if it is not yet valid, then it should not be trusted.
- Make sure that none of the certificates have been revoked. If a certificate should no longer be trusted for some reason (for example, if there is reason to suspect that its private key has been compromised, or if the service for which it was originally intended has been shut down), then that certificate should be revoked. When possible, the client should use OCSP (the online certificate status protocol) or a CRL (certificate revocation list) to ensure that none of the certificates in the chain have been revoked.
- Make sure that all of the certificates have valid signatures. Every certificate includes a digital signature that was generated by the certificate that issued it. It can be used to confirm that the certificate was actually issued by the authority that it claims and that the certificate has not been altered in any way since it was generated. If you don’t check the signature, then you can’t trust anything that’s in the certificate.
- Make sure that the certificate uses strong cryptography to minimize the risk that it can be broken. SHA-1 is no longer considered secure, so make sure that the certificate’s signature algorithm uses at least a 256-bit SHA-2 digest. Similarly, 1024-bit RSA is no longer considered secure, so if the certificate has an RSA key pair, make sure that the key size is at least 2048 bits.
- Make sure that all of the constraints associated with certificate extensions are satisfied. Certificate extensions can provide additional information about the certificate, its issuer, and how it is meant to be used. A subject alternative name extension can be used to list the addresses of the servers on which the certificate can be installed. An authority information access extension can tell the client how to check with an OCSP server, and a CRL distribution points extension can provide the location of a certificate revocation list. A basic constraints extension can indicate whether a certificate is allowed to issue other certificates. Key usage and extended key usage extensions provide information about the ways that a certificate is meant to be used.
Some of these checks may be performed automatically by the library that the client is using to provide TLS support, while others may require custom code. Check the documentation for your TLS library to see what it does automatically and what you might need to implement for yourself.
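In standard Java (JSSE), much of the “right server” check can be delegated to the library by enabling endpoint identification; “LDAPS” is one of the standard Java endpoint identification algorithm names (like “HTTPS”). A minimal sketch:

```java
import javax.net.ssl.SSLParameters;

public class EndpointIdentification
{
  public static SSLParameters ldapsParameters()
  {
    // Ask JSSE to verify that the server certificate matches the host name
    // used to connect.
    final SSLParameters parameters = new SSLParameters();
    parameters.setEndpointIdentificationAlgorithm("LDAPS");
    return parameters;
  }

  // Apply with sslSocket.setSSLParameters(ldapsParameters()) before the
  // handshake; JSSE will then fail the handshake if the certificate's
  // subject or subject alternative names don't match the server address.
}
```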
Injection attacks are one of the most common sources of security holes because it’s so easy for an unsuspecting developer to leave the door open for them. But it’s also usually very easy to prevent them through some pretty simple means.
An injection attack happens when an application uses externally-obtained data in the course of its processing, but without making sure that data is acceptable and safe. It’s especially prevalent in cases where an application plugs user input into some kind of a query or command that it sends to some kind of data repository, and doesn’t protect against the possibility that unexpected or malicious user input could cause the application to issue a different request than the one it expected. You’re probably most likely to hear about injection attacks when dealing with SQL (the structured query language, commonly used to interact with relational databases), but they can also affect interaction with other data repositories, like NoSQL databases and even LDAP directory servers.
LDAP directory servers actually have an inherent advantage over many other types of data stores when it comes to injection attacks because LDAP isn’t a text-based protocol and because LDAP APIs typically don’t make it possible to accidentally turn one type of operation into a different kind of operation. SQL injections are particularly dangerous because it’s possible for an SQL statement intended to just read some data from the database to be inadvertently converted into one that destroys, corrupts, or otherwise wreaks havoc on the data. This can’t happen in an LDAP injection, but there are still some very real threats that you need to protect against.
LDAP Filter Injections
By far, the most common type of LDAP injection attack is a filter injection. This can happen whenever you construct an LDAP search filter from its string representation and include user-provided data in the process.
For example, consider an application that offers an input field that makes it possible to look up a user by their username or their email address. Such an application might have the following code:
String filter = "(|(uid=" + userInput + ")(mail=" + userInput + "))";
If the user input is “jdoe”, then this will end up creating the filter “(|(uid=jdoe)(mail=jdoe))”. That seems safe enough, right? But consider what would happen if the user were to enter an asterisk instead of jdoe. That would cause the resulting filter to be “(|(uid=*)(mail=*))”, which would match any entry within the scope of the search that has at least one value for either the uid or the mail attribute. And what if the user were to enter “jdoe)(objectClass=*”? In that case, the code would create a filter of “(|(uid=jdoe)(objectClass=*)(mail=jdoe)(objectClass=*))”, and that filter would match any entry within the scope of the search, including entries that don’t have either the uid attribute or the mail attribute.
The Risks of Filter Injection Attacks
As illustrated above, one of the key risks of a filter injection attack is that it could cause the application to expose more entries, or different kinds of entries, than the application intended to make available. But there are other dangers as well.
Leaking Sensitive Attribute Values
One risk that people don’t often think about is the possibility of using a filter injection attack to leak the values of attributes that contain sensitive information. For example, let’s say that we know that the directory server stores a user’s social security number in the ssn attribute in the user’s entry, and that we want to find out what the social security number is for user jdoe. Let’s also assume that there aren’t any users in the directory that have a uid or mail value of noMatches. If the application constructs a search filter using the code listed above, then we might try entering the following into the input field:
noMatches)(&(uid=jdoe)(ssn=1*)
This would result in the application generating the following filter:
(|(uid=noMatches)(&(uid=jdoe)(ssn=1*))(mail=noMatches)(&(uid=jdoe)(ssn=1*)))
This filter would match the entry for user jdoe only if that entry has an ssn value whose first digit is one. If that filter doesn’t match, then we could replace the one with a two, then a three, and so on until we get a match, and then we know what the first digit of the social security number is. Then we can use the same technique to find the second digit, then the third, etc., until we know all nine digits.
Denial of Service Attacks
Filter injection attacks also open the door for very simple and very effective denial of service (DoS) attacks, whether against the application that interacts with the directory server, or against the directory server itself. An injection attack could turn what is expected to be a very efficient filter into one that is very time-consuming to process and takes up a lot of server resources. If you’re able to get enough of those going at the same time, it may eat up all of the available processing cycles in either the application or the directory server so that other requests can’t get through.
Further, if the application is designed to hold all of the entries returned from a search in memory at the same time, a search that returns a lot more entries than expected could cause the application to consume all available memory on the system, which could potentially make the application crash or spend all of its time in garbage collection, or could make the system start paging memory out to disk.
Invoking Operations Against Unintended Entries
We’ve already established that a filter injection attack has the potential to cause a search to match entries that the application didn’t expect to match. If the application makes those entries available to the end user in some way, then the application could be tricked into leaking information to the end user. But what if the application doesn’t simply make those entries available to the end user, but instead does something else with them? What if the application applies some update to each entry that matches the search filter? If you can trick the application into searching for the wrong entries, then that could lead to the application updating the wrong entries, which could cause data loss or corruption.
Defending Against Filter Injection Attacks
There are several ways that you can protect your LDAP-enabled application against filter injection attacks. Some of them are probably very easy to implement. Others may take additional effort. And you might want to implement more than one of these safeguards to take a kind of “belt and suspenders” approach.
Don’t Construct Filters by Concatenating Strings
You should never, never, never, never construct an LDAP search filter by concatenating strings, especially when that string contains any user input. Instead, you should leverage the features that your LDAP library offers to create filters programmatically.
For example, if you’re using the UnboundID LDAP SDK for Java, rather than:
String filter = "(|(uid=" + userInput + ")(mail=" + userInput + "))";
You should instead use:
Filter filter = Filter.createORFilter(
     Filter.createEqualityFilter("uid", userInput),
     Filter.createEqualityFilter("mail", userInput));
Much like using an SQL prepared statement, constructing an LDAP filter programmatically ensures that it isn’t possible for crafty input to result in a different kind of filter than the one that was intended.
If you’re using an LDAP library that doesn’t provide a way to programmatically generate search filters, then you should strongly consider selecting a new library to use for LDAP communication. If that’s not feasible, and you have to create filters from their string representations, then you absolutely must sanitize any user input included in the generated filter.
Sanitize User Input Included in Search Filters
There are two basic ways to sanitize user input.
The first is to reject any input that doesn’t appear to be valid. For example, if you have an input field in which you expect the user to provide a username or an email address, then you might only want to allow the input to contain letters, digits, periods, dashes, underscores, plus signs, and the at sign.
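An allow-list check like the one just described might look like this (the exact character set is an assumption; adjust it for what your input field legitimately accepts):

```java
import java.util.regex.Pattern;

public class InputValidator
{
  // Letters, digits, periods, dashes, underscores, plus signs, and at
  // signs, per the allow-list described above; everything else is rejected.
  private static final Pattern ALLOWED =
       Pattern.compile("^[a-zA-Z0-9.+_@-]+$");

  public static boolean isAcceptable(final String userInput)
  {
    return (userInput != null) && ALLOWED.matcher(userInput).matches();
  }
}
```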
The second way to sanitize user input is by escaping any special characters that it may contain. RFC 4515 states that the following escaping must be applied to the assertion value in the string representation of an LDAP search filter:
- The null character (U+0000) must be escaped as \00.
- The left parenthesis must be escaped as \28.
- The right parenthesis must be escaped as \29.
- The asterisk must be escaped as \2a.
- The backslash must be escaped as \5c.
You can escape any other character by placing a backslash in front of the hexadecimal representation for each byte in the UTF-8 encoding for that character, but the above characters are the ones that absolutely have to be escaped.
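If you do have to perform the escaping yourself, a minimal helper implementing just the required escapes might look like this sketch:

```java
public class FilterEscaper
{
  // Escapes the characters that RFC 4515 requires to be escaped in the
  // string representation of a filter assertion value.
  public static String escape(final String value)
  {
    final StringBuilder buffer = new StringBuilder(value.length());
    for (final char c : value.toCharArray())
    {
      switch (c)
      {
        case '\u0000': buffer.append("\\00"); break;
        case '(':      buffer.append("\\28"); break;
        case ')':      buffer.append("\\29"); break;
        case '*':      buffer.append("\\2a"); break;
        case '\\':     buffer.append("\\5c"); break;
        default:       buffer.append(c);      break;
      }
    }
    return buffer.toString();
  }
}
```

With this helper, the injection attempt shown earlier becomes harmless: the escaped value is treated as a literal string rather than as additional filter components.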
The LDAP library you’re using may provide a mechanism to do this for you (for example, if you’re using the UnboundID LDAP SDK for Java, then you can use one of the Filter.encodeValue(String) or Filter.encodeValue(byte[]) methods), but if it has that, then it’s probably also got methods to help you programmatically construct the filter, and using that approach is definitely better than trying to perform your own escaping.
Use an AND Filter To Impose Restrictions on Matching Entries
To prevent a search from matching entries of a different type than you expect, you can wrap the filter inside of an AND that will only match the desired type of entry. For example, if you know that you only want to search for user entries, and you know that all user entries have the person object class, then rather than using a filter like:
(|(uid=jdoe)(mail=jdoe))
You could instead use a filter like:
(&(objectClass=person)(|(uid=jdoe)(mail=jdoe)))
This will ensure that only user entries will be returned. If you have more specific criteria, then you can include that in the filter as well. Just note that this type of protection on its own isn’t enough to prevent all kinds of injection attacks, since it doesn’t protect against wildcards or new components injected inside the OR. So you’ll still need to make use of one of the other types of protection listed above.
Restrict the Search Request in Other Ways
The search filter is just one of the elements inside an LDAP search request. There are other elements that you may be able to adjust to reduce the effect of processing a search that isn’t exactly what you expected. Some of those are:
- Set the base DN and scope to be as specific as possible to the type of search that you’re performing. For example, if you know that you’re searching for a user, and you know that all users are in a particular branch in the directory (for example, beneath “ou=People,dc=example,dc=com”), then base your search at that branch rather than at the root of the tree, so that the search won’t match any entries outside of that branch.
- Use the size limit to prevent the server from returning more entries than expected. If you’re searching for a single user entry, then use a size limit of one, and the server will return an error result if it finds more than one matching entry.
- Use the time limit to prevent the server from spending too much time processing the search. If you expect your search to be efficient, then setting the time limit to a second or two should be more than enough time to allow the server to process it under normal circumstances, but small enough to ensure that an unexpectedly inefficient search gets cut off before too long.
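Putting these restrictions together with a programmatically-constructed filter, a sketch using the UnboundID LDAP SDK might look like the following (the base DN, object class, and attribute names are examples):

```java
import com.unboundid.ldap.sdk.Filter;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchScope;

public class RestrictedSearch
{
  public static SearchRequest buildUserSearch(final String userInput)
  {
    // Base the search at the users branch, restrict it to person entries,
    // and construct the filter programmatically so that the user input
    // can't change its structure.
    final SearchRequest request = new SearchRequest(
         "ou=People,dc=example,dc=com",
         SearchScope.SUB,
         Filter.createANDFilter(
              Filter.createEqualityFilter("objectClass", "person"),
              Filter.createORFilter(
                   Filter.createEqualityFilter("uid", userInput),
                   Filter.createEqualityFilter("mail", userInput))));

    request.setSizeLimit(1);        // error if more than one entry matches
    request.setTimeLimitSeconds(2); // cut off unexpectedly expensive searches
    return request;
  }
}
```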
Leverage the Server’s Access Control Mechanism
Another great type of protection against attacks of all kinds is to ensure that requests are issued under an account that only has permission to do what it’s supposed to do. For example, if you only want the application to be able to search for entries by targeting the uid and mail attributes, then only give the application’s account permission to issue searches targeting those attributes. Similarly, give the application read access only to the attributes that it legitimately needs to get back in search result entries, and give it write access only to the attributes that it legitimately needs to be able to update.
If an application needs to process operations on behalf of another user, then you may want to use the proxied authorization request control (described in RFC 4370) to ensure that those operations are processed in accordance with that user’s access control rights.
LDAP DN Injections
Although filter injection attacks are by far the most prevalent, it is conceivably possible for an LDAP injection attack to target entry DNs. In particular, if an application constructs an LDAP DN from user input, then it may be possible for a malicious user to provide unexpected input that could end up targeting a different entry than was expected.
For example, let’s say that an application has the following code:
String userDN = "uid=" + userInput + ",ou=People,dc=example,dc=com";
If the user enters “jdoe”, then this would construct the following DN:
uid=jdoe,ou=People,dc=example,dc=com
But if the user enters “jdoe,ou=Secret Users”, then this would construct a DN that is one level below what the application intended:
uid=jdoe,ou=Secret Users,ou=People,dc=example,dc=com
While theoretically possible, DN injections aren’t all that common or practical. There are two key reasons for this.
First, it’s rare for applications to construct DNs based on user input. Or, at least, it’s rare for well-designed applications to construct DNs based on user input. It’s a very bad thing for an application to assume that DNs have a particular structure or pattern, because not all of them do. And even if DNs have a known format when the application is being developed, it’s entirely possible that the format could change later (for example, to eliminate all personally-identifiable information from DNs), and the application would be broken. It’s far better for an application to search for an entry and learn its DN that way than to try to construct it.
Second, the only kind of injection that you can realistically achieve when constructing a DN is to target an entry deeper in the tree than you had initially expected. You can’t use an injection attack to target any arbitrary entry in the DIT. Fortunately, LDAP DNs don’t have any equivalent to the “..” in a filesystem path that allows you to traverse up to a parent.
Nevertheless, if you do encounter a scenario in which you need to construct a DN from user input (for example, if you’re adding a new entry), then you should see if the LDAP library you’re using provides a way to safely construct the DN for you. For example, if you’re using the UnboundID LDAP SDK for Java, instead of using the code:
String userDN = "uid=" + userInput + ",ou=People,dc=example,dc=com";
You should use:
DN userDN = new DN(
     new RDN("uid", userInput),
     new RDN("ou", "People"),
     new RDN("dc", "example"),
     new RDN("dc", "com"));
This will ensure that all appropriate escaping is done, and will thwart any injection attempt.
If your LDAP library doesn’t have any kind of method like the above for constructing DNs safely, then you should either get a new LDAP library that does provide this support, or you should perform the escaping yourself. The rules for escaping special characters in DNs are a little different from the rules for escaping special characters in a search filter. All the necessary details for constructing the string representation of a DN are provided in RFC 4514, but the basics are:
- You should escape the double quote character as \" or \22.
- You should escape the plus sign character as \+ or \2b.
- You should escape the comma character as \, or \2c.
- You should escape the semicolon character as \; or \3b.
- You should escape the less-than character as either \< or \3c.
- You should escape the greater-than character as either \> or \3e.
- You should escape the backslash character as either \\ or \5c.
- If a value has any leading or trailing spaces, then you should escape those spaces by prefixing them with a backslash or as \20. Spaces in the middle of a value don’t need to be escaped.
- If the value starts with an octothorpe character (#), then you should escape it as either \# or \23. You only need to escape the octothorpe character at the beginning of a value, and not in the middle or at the end of the value.
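If your library leaves DN escaping to you, a minimal helper implementing the rules above might look like this sketch:

```java
public class DNEscaper
{
  // Escapes an attribute value for use in the string representation of a
  // DN, per the RFC 4514 rules listed above.
  public static String escape(final String value)
  {
    final StringBuilder buffer = new StringBuilder(value.length());
    for (int i = 0; i < value.length(); i++)
    {
      final char c = value.charAt(i);
      switch (c)
      {
        case '"':  buffer.append("\\\""); break;
        case '+':  buffer.append("\\+");  break;
        case ',':  buffer.append("\\,");  break;
        case ';':  buffer.append("\\;");  break;
        case '<':  buffer.append("\\<");  break;
        case '>':  buffer.append("\\>");  break;
        case '\\': buffer.append("\\\\"); break;
        case '#':
          // Only needs escaping at the beginning of the value.
          buffer.append((i == 0) ? "\\#" : "#");
          break;
        case ' ':
          // Only leading and trailing spaces need escaping.
          buffer.append(((i == 0) || (i == (value.length() - 1)))
               ? "\\ " : " ");
          break;
        default:
          buffer.append(c);
          break;
      }
    }
    return buffer.toString();
  }
}
```

With this in place, the injected input shown earlier would yield the value “jdoe\,ou=Secret Users”, which stays in a single RDN rather than adding a level to the DN.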
Welcome to the new new version of LDAP.com. You may or may not remember a few years ago when a new version of LDAP.com was launched by UnboundID Corporation. Shortly after that, UnboundID was acquired by Ping Identity, and despite being very committed to LDAP directory services (note: this is my personal observation; I do not speak for Ping Identity in any official capacity), they have decided to transfer the domain to me instead of maintaining the site themselves.
The “me” in this case is Neil Wilson. I was one of the founders of UnboundID, and now I work for Ping Identity (but again, all thoughts expressed on this site are my own and do not in any way constitute an official statement or position by my employer). I’m a Principal Engineer for LDAP-related products, including the Ping Identity Directory Server, Ping Identity Directory Proxy Server, and UnboundID LDAP SDK for Java. Before Ping and UnboundID, I worked at Sun Microsystems, where I was involved with the development and maintenance of the iPlanet Directory Server and Sun Java System Directory Server Enterprise Edition (DSEE), and then I was one of the creators of the OpenDS project. Before Sun, I worked at Netscape as a technical support engineer for the Netscape and iPlanet Directory Server products, and before that, I worked as a server administrator and an LDAP-enabled application developer for Caterpillar, Inc. and TidePoint Corporation. All in all, I’ve been doing LDAP in some form or another since 1999, and I still like it a lot.
I’d like to primarily make this a technical site and one that is largely vendor-agnostic. I want to advocate LDAP as a technology, but not any one vendor’s products. I’ve created pages that list the most popular directory server software, LDAP client APIs, and LDAP-related tools, and I hope to keep them up to date. If you know of software that’s actively being maintained that isn’t listed on this site, then please contact me to suggest its inclusion. And if you’re involved with a product that is already listed and you want to let me know about a new release of that product, then feel free to drop me a line and I’ll announce it here.
I’ve got lots of ideas for new content to publish on the site, but it might take some time to clean up the existing content so please bear with me. You can use RSS to be notified of new posts that I add to the site, and you can follow @ldapdotcom on Twitter for information about site updates and other LDAP-related news.