Drupal
  |  19th December, 2017

Art of writing template files - Drupal 8

PURUSHOTAM RAI

When it comes to the Drupal 8 theme layer, there is a lot Drupal 8 offers. A few concepts that come to mind include renderable arrays, cacheability, cache contexts, cache tags, Twig and preprocessors. Some of these are improvements of old concepts, while others are new introductions in Drupal 8. In this post, I'll share my experience on how to best utilise these concepts for a robust and performant frontend with Drupal 8.

To get best out of this post, you should be comfortable with:

  1. Drupal 8 #cache
  2. Twig Debug
  3. Preprocessor functions

We will focus on the following concepts:

Renderable array caching: a walkthrough of Drupal 8's caching power and cache contexts

Drupal 8 comes with a lot of changes, and most of us are already familiar with them. It's high time we start exploiting these changes to get the best out of Drupal 8. It's very important to understand how Drupal 8 caching works: go through the drupal.org documentation and implement it thoroughly on a vanilla Drupal instance. At the end you should have an answer to all of these questions:

  • What is a renderable array?
  • Understand that every renderable array is cacheable.
  • One renderable array can consist of other renderable arrays (nesting):
    A page is a renderable array which consists of other renderable arrays: regions, and within a region, blocks. The main points to understand here are the hierarchy, the nesting, and how caching works across them.
    Reference: https://www.drupal.org/docs/8/api/render-api/cacheability-of-render-arrays
  • Making use of cache contexts:
    A common mistake I have seen many of us make is using

'#cache' => ['max-age' => -1]

There are scenarios where we cannot cache a render array outright, but it can still be cached on a per-user or per-URL basis. Exactly for those cases we have cache contexts. It's a very easy concept and you will love it once you start using it; there are several other bases on which you can decide the cacheability of a renderable array. A short sketch follows below.
Reference: https://www.drupal.org/docs/8/api/cache-api/cache-contexts
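
To make the nesting and the cache metadata concrete, here is a minimal sketch of a render array as it could be built in a block or controller; the $account variable, the markup and the cache tag are illustrative assumptions, not a prescribed recipe:

$build['profile_teaser'] = [
  '#markup' => t('Hello @name', ['@name' => $account->getDisplayName()]),
  '#cache' => [
    // Vary the cached copy per user and per URL instead of disabling
    // caching altogether or caching one copy permanently for everyone.
    'contexts' => ['user', 'url.path'],
    // Invalidate this item whenever the user entity is updated.
    'tags' => ['user:' . $account->id()],
    // Cache until the contexts or tags say otherwise.
    'max-age' => \Drupal\Core\Cache\Cache::PERMANENT,
  ],
];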
 

Selection of template files/preprocessor functions and placement of the same.

With great flexibility comes responsibility. Most of the time, we come across scenarios where we need to tweak field output based on certain requirements. Drupal provides us a handful of ways to do so; the important point is to analyze which one is the best way to achieve the expected result for the case at hand. Generally, we follow this rule for customizations:

Tweak data -> Use Preprocess level alter
Tweak Markup -> Use Twig level customizations

Hence, when we have to alter data conditionally, we make use of preprocess functions. Consider overriding the displayed value of a date field based on certain conditions. A general approach would be to implement hook_preprocess_field(), but that preprocessor runs for every field, which will definitely affect performance. In this case, writing a preprocessor specific to the date field makes more sense.
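
As a hedged sketch (the theme name mytheme and the field name field_date are assumptions), a suggestion-specific preprocess function lets us touch only that one field:

/**
 * Implements hook_preprocess_field__FIELD_NAME() for field_date.
 *
 * Runs only when the field__field_date suggestion is used, instead of
 * running for every field on the site.
 */
function mytheme_preprocess_field__field_date(array &$variables) {
  foreach ($variables['items'] as $delta => $item) {
    // Illustrative tweak: prefix the rendered date value with a label.
    $variables['items'][$delta]['content']['#prefix'] = t('Published on') . ' ';
  }
}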

One more important thing regarding hook_preprocess_HOOK:

Consider a scenario where we are using node.html.twig for nodes and we have to do some alteration for the teaser view of a specific content type.


In that case, creating a specific Twig template and then a matching preprocessor (node__content_type__teaser) just for this alteration is an extra step. Instead, we can implement the suggestion-specific preprocessor hook_preprocess_node__CONTENT_TYPE__teaser() directly, without writing the specific Twig file. So, while keeping a generic Twig template, we can make use of the specific hooks suggested by Twig debug output, which definitely performs better.
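
A minimal sketch of that suggestion-specific preprocessor (the theme name mytheme and the content type article are assumptions); the generic node.html.twig keeps doing the rendering:

/**
 * Implements hook_preprocess_node__CONTENT_TYPE__VIEW_MODE() for article teasers.
 */
function mytheme_preprocess_node__article__teaser(array &$variables) {
  /** @var \Drupal\node\NodeInterface $node */
  $node = $variables['node'];
  // Illustrative tweak: expose a shortened title variable to the template.
  $variables['short_title'] = mb_substr($node->label(), 0, 40);
}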

Regarding the placement of these preprocessors, it depends entirely on the project requirement and usage. We place them in a module when the functionality should ship in a modular way. Preprocessors work both in modules and themes, while for Twig files, placing them in a module may need hook_theme_registry_alter(), depending on the Twig file we are overriding.

Use of Drupal Attributes

It's not recommended to hardcode Drupal attributes (HTML attributes like id and class) in Twig files. The reason: several modules alter and rely on the Drupal attributes object, and they will no longer work if we hardcode the markup. One example I can think of is the Schema.org module, which provides RDF mapping through attributes.
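
For instance, instead of typing a class straight into the template, the class can be added to the attributes in a preprocess function, so that printing {{ attributes }} in Twig still carries everything other modules have attached. A hedged sketch (mytheme and the class name are assumptions):

/**
 * Implements hook_preprocess_node().
 */
function mytheme_preprocess_node(array &$variables) {
  // Add our class on top of whatever other modules have already set,
  // rather than hardcoding class="..." in node.html.twig.
  $variables['attributes']['class'][] = 'mytheme-node-card';
}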

Moreover, I would always suggest styling against the default classes provided by Drupal. Out of the box, Drupal provides appropriate classes and ids, both generic and specific, for styling. It is our duty to make good use of them by understanding the Drupal way; it also saves project time and cost when we follow best practices. Drupal core and contrib are the best examples of how to proceed on this. Following the standard structure also lets us reuse styling already written in Drupal core.

Logic in template/twig files - Yes/No?

In Drupal 7, to speed up rendering, it was always recommended to avoid writing logic in template files and, as discussed earlier, to write preprocessors instead. With Drupal 8 adopting the Twig theme engine, things are different. The tradeoff between Twig and preprocess (PHP) is mainly about performance: we need our site to be fast. Drupal 7 used the PHPTemplate theme engine, hence the recommendation to avoid logic in template files; in Drupal 8, with Twig in effect, we have a faster theme engine with several Twig filters and functions at our disposal. So we categorize our logic in two ways based on the type of work to be done: soft logic and hard logic.

Soft logic: Consider a scenario where we need to display comma-separated values. In this case, instead of writing a preprocessor to alter the data, we should use the available Twig filter "safe_join", e.g. {{ items|safe_join(', ') }}. It is definitely faster than the traditional route of a PHP preprocessor.

Hard logic: In the above case, let's say the value is a taxonomy term and we need to print all of its parent terms too. Then we need to load the parent terms before joining them for final display, and that preprocessing belongs in a preprocessor.
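
A hedged sketch of that kind of preprocessor (mytheme and the field name field_tags are assumptions); the heavy entity loading happens in PHP, and Twig only has to join and print the prepared list:

/**
 * Implements hook_preprocess_field__FIELD_NAME() for field_tags.
 */
function mytheme_preprocess_field__field_tags(array &$variables) {
  $term_storage = \Drupal::entityTypeManager()->getStorage('taxonomy_term');
  $labels = [];
  foreach ($variables['element']['#items']->referencedEntities() as $term) {
    // loadAllParents() returns the term itself plus all of its ancestors.
    foreach ($term_storage->loadAllParents($term->id()) as $parent) {
      $labels[$parent->id()] = $parent->label();
    }
  }
  // Hand a flat array to Twig, where safe_join can print it.
  $variables['term_trail'] = array_values($labels);
}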
 

Frontend / Backend Collaboration

One more important thing I've observed: good collaboration between frontend and backend developers during the project is very important. Sometimes as frontend developers we come across weird requirements; in those cases it is important to sit together, understand the requirement, and get things done the Drupal way to get the best out of Drupal.

Also, go through this official guide to understand best practices for Drupal theming: https://www.drupal.org/docs/8/theming/twig/twig-best-practices-preprocess-functions-and-templates


Drupal
  |  6th December, 2017

Google Assistant Integration with Drupal

PURUSHOTAM RAI

The Rise of Assistants

In the last couple of years we have seen the rise of assistants. AI is enabling our lives more and more, and with the help of devices like Google Home and Amazon Echo it is now entering our living rooms and changing how we interact with technology. Though assistants have been around for a couple of years through the Google Home app on Android, the UX is changing rapidly with home devices, where we now experience conversational UI: no more typing and searching, you can converse with your device to book a cab or play your favourite music. Though the verdict on home devices like Echo and Google Home is still pending, the underlying technology, AI-based assistants, is here to stay.

In this post, we will explore Google Assistant Developer framework and how we can integrate it with Drupal.

Google Assistant Overview


Google Assistant works with the help of apps that define actions, which in turn invoke operations to be performed on our products and services. These apps are registered with Actions on Google, which is essentially a platform made up of such apps, connecting different products and services through them. Unlike traditional mobile or desktop apps, users interact with Assistant apps through conversation: natural-sounding back-and-forth exchanges (voice or text) rather than the traditional click-and-touch paradigm.

The first step in the flow is understanding user requests through actions, so let's learn more about that.

How do Actions on Google work with the Assistant?

To get an overview of the workflow, it is very important to understand how Actions on Google actually works with the Assistant. From a development perspective, it's crucial to understand the Google Assistant and Actions on Google model as a whole, so that extending it becomes easier.

 

Actions on Google

 

It all starts with the user requesting an action, followed by Google Assistant invoking the best corresponding app via Actions on Google. It is then the duty of Actions on Google to contact the app by sending it a request. The app must be prepared to handle the request, perform the corresponding action and send a valid response back to Actions on Google, which is then passed to Google Assistant. Google Assistant renders the response in its UI, displays it to the user, and the conversation continues.

Let's build our own action. The following tools are required:

  • Ngrok - to expose our local web server over a publicly accessible HTTPS URL.
  • Editor - Sublime/PHPStorm
  • Google Pixel 2 - Just kidding! Although you can order 1 for me :p
  • Bit of patience and 100% attention

STEP1: BUILD YOUR ACTION APP

The very first step is building our Actions on Google app. Google provides three ways to accomplish this:

  1. With Templates
  2. With Dialogflow
  3. With Actions SDK

The main purpose of this app is to match user requests with actions. For now, we will go with Dialogflow (for beginner convenience). To develop with Dialogflow, we first need to create an Actions on Google developer project and a Dialogflow agent. Having a project lets us access the developer console to manage and distribute our app.

  1. Go to the Actions on Google Developer Console.
  2. Click on Add Project, enter YourAppName for the project name, and click Create Project.
  3. In the Overview screen, click BUILD on the Dialogflow card and then CREATE ACTIONS ON Dialogflow to start building actions.
  4. The Dialogflow console appears with information automatically populated in an agent. Click Save to save the agent.

Post saving the agent, we start improving and developing it. We can consider this step as training our newly created agent with structured training data sets; these data sets are called intents. An individual intent comprises the query patterns a user may say to perform an action, along with the events and actions associated with that intent, which together define the purpose the user wants to fulfill. So, every task the user wants the Assistant to perform is mapped to an intent. Events and actions can be considered a definitive representation of the actual event and task to be performed, which our products and services use to understand what the end user is asking for.

So, here we define all the intents that make up our app. Let's start by creating an intent to do a cache rebuild.

  1. Create a new intent with name CACHE-REBUILD.
  2. Add all the query patterns we can think of that a user might say to invoke this intent. (Query patterns may contain parameters too; we will cover this later.)
  3. Add event cache-rebuild.
  4. Save the intent.

 

For now, this is enough to understand the flow; we will focus on entities and other aspects later. To verify that the intent you created gets invoked when the user says “do cache rebuild”, use the “Try it now” box on the right side of the Dialogflow window.

STEP2: BUILD FULFILLMENT

After we are done defining the action in Dialogflow, we need to prepare our product (the Drupal app) to fulfill the user request. After understanding the user request and matching it with an intent and action, Actions on Google is going to invoke our Drupal app. This is accomplished using webhooks: Google will send a POST request with all the details. Under the Fulfillment tab we configure our webhook, and we need to ensure that our web service fulfills the webhook requirements.

According to those requirements, the web service must use HTTPS and the URL must be publicly accessible, hence we need to install ngrok. Ngrok exposes a local web server to the internet (for example, ngrok http 80 tunnels port 80 of the local machine to a public HTTPS URL).


 

Once we have a publicly accessible URL, we just need to add it under the Fulfillment tab. Since this URL will receive the POST request and do the processing, it should be the endpoint where we are going to handle requests (something like http://yourlocalsite.ngrok.io/google-assistant-request).


 

Now, we need to build the corresponding fulfillment on the Drupal side to process the intent.

OK! It seems simple: we just need to create a custom module with a route and a controller to handle the request. Indeed it is simple; the only important part is understanding the flow, which we covered above.

So, why are we waiting? Let’s start.

Create a custom module and a routing file:

droogle.default_controller_handleRequest:
  path: '/google-assistant-request'
  defaults:
    _controller: '\Drupal\droogle\Controller\DefaultController::handleRequest'
    _title: 'Handle Request'
  requirements:
    _access: 'TRUE'

Now, let’s add the corresponding controller

<?php

namespace Drupal\droogle\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Logger\LoggerChannelFactoryInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\RequestStack;

/**
* Class DefaultController.
*/
class DefaultController extends ControllerBase {

 /**
  * Symfony\Component\HttpFoundation\RequestStack definition.
  *
  * @var \Symfony\Component\HttpFoundation\RequestStack
  */
 protected $requestStack;

 /**
  * The logger factory.
  *
  * @var \Drupal\Core\Logger\LoggerChannelFactoryInterface
  */
 protected $loggerFactory;

 /**
  * Constructs a new DefaultController object.
  */
 public function __construct(RequestStack $request_stack, LoggerChannelFactoryInterface $loggerFactory) {
   $this->requestStack = $request_stack;
   $this->loggerFactory = $loggerFactory;
 }

 /**
  * {@inheritdoc}
  */
 public static function create(ContainerInterface $container) {
   return new static(
     $container->get('request_stack'),
     $container->get('logger.factory')
   );
 }

 /**
  * Handles the webhook request sent by Actions on Google.
  *
  * @return \Symfony\Component\HttpFoundation\JsonResponse
  *   The JSON response for the Assistant.
  */
 public function handleRequest() {
   $this->loggerFactory->get('droogle')->info('droogle triggered');
   $this->processRequest();
   $data = [
     'speech' => 'Cache Rebuild Completed for the Site',
     'displayText' => 'Cache Rebuild Completed',
     'data' => '',
     'contextOut' => [],
     'source' => 'uniworld',
   ];
   return JsonResponse::create($data, 200);
 }

 protected function processRequest() {
   $params = $this->requestStack->getCurrentRequest();
   // Here we will process the request to get intent

   // and fulfill the action.
 }
}

Done! We are ready with a request handler to process the request that will be made by Google Assistant.
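
The processRequest() method above is deliberately left as a stub. As a hedged sketch of what it could do for our cache-rebuild intent (the payload structure assumes the Dialogflow V1 webhook format, where the action name arrives under result):

protected function processRequest() {
  $request = $this->requestStack->getCurrentRequest();
  // Dialogflow posts the matched intent and action as a JSON body.
  $payload = json_decode($request->getContent(), TRUE);
  $action = isset($payload['result']['action']) ? $payload['result']['action'] : '';
  if ($action === 'cache-rebuild') {
    // Fulfill the intent we defined earlier: rebuild Drupal's caches.
    drupal_flush_all_caches();
  }
}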

 

STEP3: DEPLOY FULFILLMENT AND TESTING THE APP

Part of the deployment has already been done, since we are developing locally. Now we need to enable our custom module. After that, let's get back to Dialogflow and establish the connection with the app to test this. Earlier we configured the fulfillment URL details; ensure the webhook is enabled for all domains.


 

Let's get back to the intent that we built, enable the webhook there too, and save the intent.


 

Now, to test this we need to integrate it with a device or a live/sandbox app. Under the Integrations tab, Google provides several options for this. Enable the Web Demo integration and open the URL in a new tab to test it:


 

Speak up, test your newly built app and let Google Assistant do its work.

As seen in the screenshot, there can be two types of responses: one where our server is not able to handle the request properly, and one where the Drupal server sends a valid JSON response.

Great! The connection is now established. You can now add more intents in the Actions on Google app and handle each intent and action correspondingly at the Drupal end. This is just a taste; conversational UX and assistant technology will definitely impact how we interact with technology, and we believe Drupal has a great role to play as a robust backend.


Drupal
  |  22nd November, 2017

Override existing Configuration entity types - Drupal 8

neha

Why do we need to override Config Entity Types?

  1. By default, the vocabulary list displays all the vocabularies. If we want to restrict certain roles from viewing certain vocabularies, overriding the relevant VocabularyListBuilder method is the solution to display specific, no, or all vocabularies.

  2. Let's assume we need to specify a vocabulary path for each vocabulary apart from the name, title, description, vid, etc. In this case we would need to override the default vocabulary form of the taxonomy_vocabulary config entity type.

  3. Suppose we want a custom access check for views on the basis of role, user, or views operation; in that case we would need to override the access control handler of the view config entity type and write our own logic.

  4. Another use case: if we want the confirmation message to list all the image fields that use the image style being deleted, we again need to override the ImageStyleFlushForm class and redefine its getConfirmText() method.

In short, to customise and meet dynamic requirements that may not be supported by a config entity type's definition (the @ConfigEntityType annotation in core or contributed modules), we need to override existing config entity types and write some custom code :).

How can we override Config Entity Types?

Entity types are defined using doctrine-style annotations, unlike the array-based info hooks that were common earlier. Also, unlike content entity types where everything is a field, NOTHING is a field for a configuration entity type.

Every Drupal config entity type is defined by a @ConfigEntityType annotation. An entity controller is completely different from the controller of the MVC pattern; to avoid this confusion in terminology, entity controllers are termed handlers in Drupal 8. Each form related to a particular entity type, say taxonomy_vocabulary, is declared inside handlers under the form key.
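
For reference, here is an abridged (and hedged, not byte-for-byte) look at how core's taxonomy_vocabulary entity declares its form handlers inside the @ConfigEntityType annotation:

/**
 * @ConfigEntityType(
 *   id = "taxonomy_vocabulary",
 *   label = @Translation("Taxonomy vocabulary"),
 *   handlers = {
 *     "list_builder" = "Drupal\taxonomy\VocabularyListBuilder",
 *     "form" = {
 *       "default" = "Drupal\taxonomy\VocabularyForm",
 *       "reset" = "Drupal\taxonomy\Form\VocabularyResetForm",
 *       "delete" = "Drupal\taxonomy\Form\VocabularyDeleteForm"
 *     }
 *   },
 *   ...
 * )
 */
class Vocabulary extends ConfigEntityBundleBase implements VocabularyInterface {
  ...
}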

In this article, we will take the example of adding custom form elements to config entity type forms to explain this.

If we need to add a custom element to any of these forms, we need to follow these two steps:

I) Set a new handler class specific to that form.

  1. Implement hook_entity_type_alter(array &$entity_types).
  2. Set new handler class as : 
    $entity_types[{id}]->setHandlerClass('form',
     ['{form_type}' => 'Drupal\my_module\MyModuleForm',
     '....',
     '....'
     ]);

    where id is the config entity type id, form_type (e.g. default, reset, delete) is whichever form we want to override, and MyModuleForm is the class name of the new form we'll define in Step II.
    Here is the sample code for overriding the default form of the taxonomy vocabulary; a complete hook implementation wrapping it is sketched just after.

    $entity_types['taxonomy_vocabulary']->setHandlerClass('form',
     ['default' => 'Drupal\my_module\VocabularyForm',
     'reset' => 'Drupal\taxonomy\Form\VocabularyResetForm',
     'delete' => 'Drupal\taxonomy\Form\VocabularyDeleteForm'
     ]);
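
Putting step I together, a minimal, hedged sketch of the full hook implementation (the module name my_module is an assumption) as it could appear in my_module.module:

/**
 * Implements hook_entity_type_alter().
 */
function my_module_entity_type_alter(array &$entity_types) {
  /** @var \Drupal\Core\Entity\EntityTypeInterface[] $entity_types */
  if (isset($entity_types['taxonomy_vocabulary'])) {
    $entity_types['taxonomy_vocabulary']->setHandlerClass('form', [
      // Our override for the default (add/edit) vocabulary form.
      'default' => 'Drupal\my_module\MyModuleForm',
      // Keep the core handlers for the other form operations.
      'reset' => 'Drupal\taxonomy\Form\VocabularyResetForm',
      'delete' => 'Drupal\taxonomy\Form\VocabularyDeleteForm',
    ]);
  }
}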

     

II) Define the class set in Step I.

  1. Extend the actual class of the form we want to add new form elements to. 
    use Drupal\taxonomy\VocabularyForm as VocabularyFormBuilderBase;

    {aliasing is optional; we only need it if we want to keep the new class name the same as the base class, i.e. VocabularyForm}.

    class MyModuleForm extends VocabularyFormBuilderBase 

    OR simply, 

    class MyModuleForm extends VocabularyForm

    This extension matters because we inherit the functions and form elements defined in the parent class (VocabularyForm) and add our additional form element without disturbing core code. This is simply the OOP concept of inheritance.

  2. We need to override the form() method by:
    1. Inheriting the parent elements via parent::form(...), and
    2. Defining the new custom elements, as in the basic example below:
      $form['third_party_settings']['qed42_textfield'] = array(
       '#type' => 'textfield',
       '#title' => t('QED42 Custom Form Element'),
       '#default_value' => $vocabulary->getThirdPartySetting('my_module', 'qed42_textfield', 'Qed42 textfield default value')
      );  

      Config entities have the getThirdPartySetting() method by default (they inherit it by extending ConfigEntityBase, which implements ConfigEntityInterface, which in turn extends ThirdPartySettingsInterface). The third-party settings methods allow a module to set and retrieve its own values on the entity.

    3. Similarly, we can override the save() method to store the value of the newly added form element with third-party settings:

      If the form element is nested under a #tree container, we set the value as

      $vocabulary->setThirdPartySetting('my_module', 'qed42_textfield', $form_state->getValue('third_party_settings')['qed42_textfield']);

      otherwise:

      $vocabulary->setThirdPartySetting('my_module', 'qed42_textfield', $form_state->getValue('qed42_textfield'));

      and of course call the parent save() method.

    4. We can apply the same logic to extend any existing method, and we can also define new methods inside our new form class. A consolidated sketch of the class follows this list.
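
Pulling step II together, a hedged sketch of the overriding form class (the my_module module name and the qed42_textfield element are carried over from the snippets above; file: my_module/src/MyModuleForm.php):

<?php

namespace Drupal\my_module;

use Drupal\Core\Form\FormStateInterface;
use Drupal\taxonomy\VocabularyForm as VocabularyFormBuilderBase;

/**
 * Vocabulary form override that adds a third-party settings textfield.
 */
class MyModuleForm extends VocabularyFormBuilderBase {

  /**
   * {@inheritdoc}
   */
  public function form(array $form, FormStateInterface $form_state) {
    $form = parent::form($form, $form_state);
    $vocabulary = $this->entity;
    $form['third_party_settings']['qed42_textfield'] = [
      '#type' => 'textfield',
      '#title' => $this->t('QED42 Custom Form Element'),
      '#default_value' => $vocabulary->getThirdPartySetting('my_module', 'qed42_textfield', 'Qed42 textfield default value'),
    ];
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function save(array $form, FormStateInterface $form_state) {
    $vocabulary = $this->entity;
    // Persist the element's value as a third-party setting on the vocabulary.
    $vocabulary->setThirdPartySetting('my_module', 'qed42_textfield', $form_state->getValue('qed42_textfield'));
    return parent::save($form, $form_state);
  }

}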

Any configuration entity type (date format, etc.) can be overridden similarly, and this approach extends to the list_builder, access and other handlers. Apart from overriding, we can also add new flavours of entity handlers using the same steps.

 

 


Design
  |  28th October, 2017

36 Days of fun in 36 Days Of Type Challenge

taniya.pramanik

36 Days of Type is a project challenging visual artists to create a letter or number a day for 36 days, exploring different media and pushing the boundaries of creative expression. It is a great time for illustrators, typographers and graphic designers to experiment with their craft, and it provides a platform to make a statement with one's work.

36 days of type (letters)

This was our first time participating in the worldwide phenomenon of #36DaysOfType2017, the fourth edition, and we were more than excited to take up this challenge! Since it is an open challenge and the scope for experimentation was infinite, it took some time to decide on the theme we would choose. We wanted to give this project an illustrative spin and also wanted to showcase India's cultural extravaganza. That's how we decided to illustrate the performing arts of India through letters and numbers. These performing arts include dance, music, theatre and martial arts practised across the states of India.

This series is meant to highlight India and the diverse manifestations of its cultural beauty. It also aims to surface art forms that many people are not aware of, and the dying art forms of our country. The letters have been directly or indirectly represented as forms of dance, music, theatre or martial arts, while the numbers have been represented as Indian classical musical instruments.

Why we took this challenge

We’ve been asked a few times - what was the reason behind taking up the 36 days of type challenge? Well, to describe it in a sentence, creative expression knows no bounds and we should not let any opportunity slip away that lets this creative energy expand.

However, we had more reasons to participate in this challenge. It requires everyone participating to post something every day, which helps us be pragmatic, sensitive to time and disciplined about design. It is a challenge to express creative thoughts within certain limitations: letters and numbers have a predefined structure and form, which sets a boundary for expression. Thus, this challenge is a great way to express ideas within shapes, forms and structural limitations.

Since we were showcasing the performing arts of India, every character required an extraordinary amount of research in a limited time, which made us better at researching a topic in a short span. All the ideas we build in our heads can go to waste if they are not transformed into a tangible form.

In order to do so, we must present these ideas properly for the world to see our perspective. Attention to detail is well understood in the design community but often overlooked, and our objective is to strike a balance between ideas and details.

Our experience

The development of this series involved intensive research into the various performing art forms, studying their aspects and brainstorming sketches for them. Some days we were successful in creating a great blend between the letterform and the art form, others were a struggle, but we racked our brains until we were happy with the design.

36 days of type (numbers)

We had a lot of fun playing around with the letters and numbers. We would give and take feedback from each other too, and learned a lot of new things in the process. Some of our favourite letters from the series are ‘B’ (Baul folk music from West Bengal), ‘K’ (Kathputli from Rajasthan), ’T’ (Theyyam, a colourful ritual dance of Kerala).

We enjoyed every minute of the journey and are looking forward to doing it again!

 


Design
  |  30th September, 2017

Experiencing DrupalCon Vienna

Archita Arora

The call for sessions for DrupalCon Vienna had just closed, and all of us who had submitted sessions were waiting, eager to find out the result. It was a usual day at the office when I got an email from the Drupal Association. I opened the mail and what I saw made me want to jump out of my chair! Yes, my session for DrupalCon was confirmed and I was so excited! It was unbelievable because it was quite unexpected; not that I was underconfident, but being a young member of the Drupal community and getting selected amongst so many talented and experienced people is sure to surprise anyone. With only 1.5 years of experience working with QED42 as a user experience designer, this felt like a dream come true.

I’ve spoken publicly before, mostly in the context of design, but preparing for a Drupal conference was a whole new affair for me.

My excitement had not ceased, but there was a slight fear building up of speaking at one of the biggest conferences. I had to prepare myself to speak in front of an audience with a lot more experience in Drupal than I had.

Being a young designer, the whole idea of my talk was to put forward a fresh perspective on designing for Drupal.

I had a feeling that my session was selected to bring in a fresh point of view from a person who was new to the Drupal community, and who as a designer could help contribute a positive change to the Drupal world.

I wanted to put forward the smallest of the details of my experience - in a manner that could successfully communicate my opinions to the versatile audience - which could be anyone from a developer to a project manager.

I wrote down my thoughts and insights I encountered during the process of creating my slides. While preparing for the talk I realised that I had gained a good amount of information which would help me further in presenting my topic.    

Understanding the main themes of the conference helped me shape the focus of my talk. Speaking for the first time in front of a technical audience can be intimidating, since some audience members may know more about your subject than you do, so preparation is crucial. I was also getting helpful feedback while preparing, particularly about expanding the content to make it more relevant.

I had prepared myself for the best and the worst. It was about time that I began my presentation. The first few minutes of the talk were the toughest but as I went on sharing my knowledge I became less anxious. I realised that speaking at a conference is a great way to share knowledge and experience and it is quite surprising to see how much we can learn when researching our talk topics. It’s also a great opportunity to network with other professionals in our field, and make some great new friends.

It was a great relief to know that I did not screw up as badly as I thought I might, and maybe my talk helped someone. It was also very rewarding to get feedback from the audience and hear their thoughts.

The entire presentation felt like an out of body experience to me. It took a lot of time and effort to prepare and speak at this conference but it was worth it. And it is good to know that I’m a part of Drupal and I’m being able to participate in whatever way I can, to add to this amazing community.

Travelling alone this far for the first time, I was scared. But knowing that I am a part of the Drupal community and connected to everybody around me through Drupal gave me a sense of belonging and lifted my spirits. To my surprise, I wasn't feeling out of place, because it felt like a family, our own Drupal family, where our collective aim is to work for the betterment and growth of this community. My excitement was at its peak till the closing ceremony, and I'm now looking forward to more such opportunities to speak and share stories.


 


Drupal
  |  30th September, 2017

Securing Cookie for 3rd Party Identity Management in Drupal

navneetsingh

We are in an era where we see a lot of third-party integrations being done in projects. In Drupal-based projects, cookie management is done via Drupal itself to maintain the session, whether it is a pure Drupal project or a decoupled one.

But what about the scenario where the user's information is managed by a third-party service and no user information is saved in Drupal, and authentication is also done via a third-party service? How can we manage the cookie in this case to run our site session and keep it secure?

One way is to set and maintain the cookie on our own. In this case, our users will be anonymous to Drupal, and we keep the session running based on cookies. The user information is stored in the cookie itself, which can then be validated when a request is made to Drupal.

PHP has a function to set cookies, setcookie(), which we can use to create and destroy cookies. The flow will be: a user login request made to the website is verified via the third-party service, and we then call our setCookie() helper, which sets a cookie containing the user information. But securing the cookie is a must, so how do we do that?

For this, let's refer to the Bakery module to see how it does it; it contains functions for encrypting, setting and validating the cookie.

To achieve this in Drupal 8, we will write a helper class, let's say UserCookie.php, and place it in '{modulename}/src/Helper/'. Our cookie helper class will contain static methods for setting and validating the cookie; static, so that we are able to call them from anywhere.
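
A minimal skeleton of that helper, assuming a module named mymodule (the methods discussed below slot into this class):

<?php

namespace Drupal\mymodule\Helper;

use Drupal\Core\Site\Settings;

/**
 * Static helper for encrypting, setting and validating the session cookie.
 */
class UserCookie {

  // encryptCookie(), setCookie(), decryptCookie() and validateCookie()
  // from the snippets below go here.

}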

We will have to encrypt the cookie before setting it, so we will use the openssl_encrypt() PHP function in the following manner:

/**
* Encrypts given cookie data.
*
* @param string $cookieData
*   Serialized Cookie data for encryption.
*
* @return string
*   Encrypted cookie.
*/
private static function encryptCookie($cookieData) {

 // Create a key using a string data.
 $key = openssl_digest(Settings::get('SOME_COOKIE_KEY'), 'sha256');

 // Create an initialization vector to be used for encryption.
 $iv = openssl_random_pseudo_bytes(16);

 // Encrypt cookie data along with initialization vector so that initialization
 // vector can be used for decryption of this cookie.
 $encryptedCookie = openssl_encrypt($iv . $cookieData, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);

 // Add a signature to cookie.
 $signature = hash_hmac('sha256', $encryptedCookie, $key);

 // Encode signature and cookie.
 return base64_encode($signature . $encryptedCookie);
}
  1. The string parameter in openssl_digest() can be any string you like to use as the key; a simple keyword works too.
  2. The same key must be used when decrypting the data.
  3. The same initialization vector is needed while decrypting the data, so to retrieve it back we prepend it to the cookie data string before encryption.
  4. We also add a signature, generated using the same key as above. We will verify this signature while validating the cookie.
  5. Finally, we encode the signature and the encrypted cookie data together.

For setting cookie:
 

/**
* Set cookie using user data.
*
* @param string $name
*   Name of cookie to store.
* @param mixed $data
*   Data to store in cookie.
*/
public static function setCookie($name, $data) {
 $data = (is_array($data)) ? json_encode($data) : $data;
 $cookieData = self::encryptCookie($data);
 // Expiry below assumes the setting holds a cookie lifetime in seconds.
 setcookie($name, $cookieData, time() + Settings::get('SOME_DEFAULT_COOKIE_EXPIRE_TIME'), '/');
}

Note: You can keep 'SOME_COOKIE_KEY' and 'SOME_DEFAULT_COOKIE_EXPIRE_TIME' in your settings.php. Settings::get() will fetch that for you.
Tip: You can also append the expiration time of the cookie to the encrypted data itself so that you can verify it at decryption time. This stops anyone from extending the session by manually changing the cookie's expiry.

Congrats! We have successfully encrypted the user data and set it into a cookie.

Now let’s see how we can decrypt and validate the same cookie.

To decrypt cookie:

/**
* Decrypts the given cookie data.
*
* @param string $cookieData
*   Encrypted cookie data.
*
* @return bool|mixed
*   The decrypted cookie data, or FALSE if the extracted signature
*   does not match.
*/
public static function decryptCookie($cookieData) {

 // Create a key using a string data used while encryption.
 $key = openssl_digest(Settings::get('SOME_COOKIE_KEY'), 'sha256');

 // Reverse base64 encryption of $cookieData.
 $cookieData = base64_decode($cookieData);

 // Extract signature from cookie data.
 $signature = substr($cookieData, 0, 64);

 // Extract data without signature.
 $encryptedData = substr($cookieData, 64);

 // Signature should match for verification of data.
 if ($signature !== hash_hmac('sha256', $encryptedData, $key)) {
   return FALSE;
 }

 // Extract the initialization vector (the first ciphertext block) that was
 // prepended to the data during encryption.
 $iv = substr($cookieData, 64, 16);

 // Extract the main encrypted string which contains the profile details.
 $encrypted = substr($cookieData, 80);

 // Decrypt the data using key and
 // initialization vector extracted above.
 return openssl_decrypt($encrypted, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
}
  1. We generate the same key using the same string parameter given during encryption.
  2. Then we reverse the base64 encoding, as we need to extract the signature to verify it.
  3. We generate the signature again with the same key that was used to create it during encryption. If the signatures don't match, validation fails!
  4. Otherwise, we extract the initialization vector from the encrypted data and use it to decrypt the data, which is returned to be used.
/**
* Validates cookie.
*
* @param string $cookieData
*   Encrypted cookie data to validate.
*
* @return boolean
*   True or False based on cookie validation.
*/
public static function validateCookie($cookieData) {
 if (self::decryptCookie($cookieData)) {
   return TRUE;
 }
 return FALSE;
}

We can verify the cookie on requests made to the website to maintain our session. You can implement a function for expiring the cookie to simulate user logout, and we can also use the decrypted user data from the cookie to serve user-related pages.
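
For completeness, a hedged sketch of such an expiry helper (added to the same UserCookie class); it simply overwrites the cookie with an expiry time in the past so the browser drops it:

/**
* Expires the cookie, effectively logging the user out.
*
* @param string $name
*   Name of the cookie to expire.
*/
public static function expireCookie($name) {
 setcookie($name, '', time() - 3600, '/');
}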


Drupal
  |  28th September, 2017

Vagrant NFS Sync Problem on macOS High Sierra

navneetsingh

Disclaimer: This is a temporary workaround which worked for me, not a permanent fix and may not work for everyone

Many of us have already tried the flavor of Apple's new OS, macOS High Sierra. Doing that might have given you nightmares if you were working on a Vagrant-based project.

If you are facing missing files, or file changes not getting detected inside Vagrant, it might be because of compatibility issues between Apple's new APFS (Apple File System) and Vagrant's synced folder type: nfs.

I experienced this a couple of weeks ago, when I upgraded to a beta version of macOS High Sierra. I was not able to see a few modules (like Views, Taxonomy) and files inside Vagrant (guest), even though these were actually present on my Mac (host). This is an already reported issue.

After struggling for the next few days and nights, I found the quickest possible solution that worked for me.

Please follow the steps mentioned below:

  1. Install the vagrant gatling rsync plugin.
    vagrant plugin install vagrant-gatling-rsync
  2. Make the changes to the Vagrantfile as mentioned here: https://github.com/smerrill/vagrant-gatling-rsync

    Add the following part only:
    # Configure the window for gatling to coalesce writes.
     if Vagrant.has_plugin?("vagrant-gatling-rsync")
       config.gatling.latency = 2.5
       config.gatling.time_format = "%H:%M:%S"
     end
    
     # Automatically sync when machines with rsync folders come up.
     config.gatling.rsync_on_startup = true
    
    Add this code inside the Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| block of your Vagrantfile. You can also refer to lines 108 and 114 of the Vagrantfile given in https://github.com/navneet0693/high-sierra-vagrant-problem.
     
  3. Important: Change the vagrant_synced_folders type from `type: nfs` to `type: rsync` in box/config.yml. The synced folder is usually your project repo, which is shared between the host (Mac) and the guest (Vagrant box).
     
    vagrant_synced_folders:
      # The first synced folder will be used for the default Drupal installation, if
      # build_makefile: is 'true'.
      - local_path: ..
        destination: /var/www/demo-drupal
        type: rsync
        create: true
    
  4. Then you will have to provision your Vagrant box again.
    vagrant reload --provision


Design
  |  15th April, 2017

QED42 Design Studio -- Making

Archita Arora

 

We started our Delhi operations in Nov 2015. Finding the place was a little difficult, but we managed to find a nice quiet basement, and armed with functional furniture and an internet connection, we got started. Though we liked our cozy new office, we soon realised it had terrible cellular and internet reception, and after months of trying to get it fixed and trying different options, we decided to move out. Since then, checking for signal strength and internet connectivity has topped our checklist :).

As almost all of our design team operates from Delhi, it was natural for the space to be inspired by good design studios and design in general. Our major source of inspiration was our neighbourhood coffee shop, Blue Tokai; often taking breaks there, we came to like their space very much for its functionality and offbeat design and interiors.

After a couple of discussions, we had a broad idea of the things and the vibe we wanted in our new place. These discussions gave us a good checklist; things like natural light and an open area were important to us. With a comprehensive list came the tedious task of hunting for a place. It took us almost two months to find one that offered what we wanted and was within our budget, and we locked it down immediately. We started the office setup project with a 3D model of the office, figuring out what goes where.

 

Once we had a clear picture of how we wanted our office to look, we went on a hunting spree for lights. Lighting up a place makes a huge difference in creating a soothing ambience. So we explored an array of options in the light markets around Delhi, and finally picked up a variety from Khan Market: tracking lights, ceiling surface lights, a suspended light, table and floor lamps, and some really cool hanging lamps. Staying true to our inspiration, we chose a colour combination of white, cool grey, earthy brown and a hint of black for the interiors, so we decided all our lights would have a black body.

We spent some time researching the furniture we wanted for our office space and realised that getting the right furniture is quite a tedious task. We wanted the furniture to look contemporary, be comfortable and suit our office space and interiors, so we had to be clear about certain questions: “How much money were we going to allocate for the furniture?” “What type of chairs or desks will we need for the office?” We finally settled on custom-made furniture from Kirti Nagar in Delhi. Two long white tables were added to the main office area, and a separate set of wooden tables and chairs was set up for the cafeteria. For a more comfortable working experience we've included two bean bags as well. No office space is complete without a little corner for spending quality time reading books; we have set up our own little library in the office with a stylish bookshelf, where we can take some time out of the busy schedule and just relax for a while.

Talking about comfort, it is also important to have proper air conditioning especially in a comparatively hotter place like Delhi. Our office has a split air conditioning system since we have three different work areas. One is the main workspace and the other two are conference rooms.

It sometimes happens that some of your ideas won't get the chance to be executed. For us as well, a few ideas did not take shape for various reasons: cost limits, time constraints, availability in the city, etc. But we were quite happy with the way everything turned out.

Cafeteria

We did not want to separate our workspace and cafeteria by creating walls all around. Instead, two different kinds of flooring do the job for us by creating a visual difference: the cafeteria has wooden flooring, which makes it look like a completely different section from the workspace, which has carpet tile flooring.

Most of the time we are seen lying on the oh so comfortable carpet (thanks to Money Bhaiya who keeps it clean all the time).

When we meet at lunch break or anytime during the day, it is a lot of fun! We catch up on everything apart from work. It is a place where we laugh, talk and connect with each other, and we definitely don't miss a chance to pull Manoj's leg. Isn't it these little moments of joy we share with each other that make the office less boring and more homely?
 

The Reading area

Having a cozy, isolated space for a person who loves to read or for the one who just wants to spend some alone time is what the reading area is all about. It surely helps us get our energy and motivation back.

We are a team of motivated people who like to compare work to an adventurous journey where we explore so many things and learn something every day. Sometimes we have a lot of work pressure, and the reading area gives us respite from the restlessness of the day; it acts as a zen space for us to calm our minds. No one would deny that such breaks are really important to meet deadlines.

The reading area gives us the opportunity to think and grow with a peaceful mind and come up with effective solutions, because sticking to the laptop the entire day does not always help us find the solution.

And don’t forget the comfort of the bean bags!

Plants all around

‘GO GREEN’ because that’s the need of the hour and we unfailingly follow that.

From the workstations to the balcony, there are plants kept in beautiful white pots. They not only add the green to our palette, but also make our space fresh and clean.
 

Designers’ cabin

This is ideally the brainstorming room where we discuss and create products and services with prolonged discussions and iterations. We’ve enhanced the look of this area by placing colour co-ordinated chairs and a round table. To make it more designer-like and artsy there stands an easel with a white board and a black board as we love using chalks!

Balcony

When we feel like taking a glimpse of nature, we step out onto the balcony, where we have put up hanging plants and four chairs. We can sit and relax while we take a good sip of coffee amid the greenery.

We do have our fun benefits of working here such as:

Our all time stocked up fridge makes every hour enjoyable, and Happy hour is not the only hour we enjoy.

When we are too lazy to travel back home after work or just want to take our mind off the work, we like to let our fellow mates showcase their talent and entertain us with some good music. Jaideep and Zango sir lighten up the mood in the office by playing some beautiful music on flute (which Jaideep carries everyday in his bag) and guitar.

--------------

What we sincerely believe is that the mood of a workspace plays an important role in defining the efficiency, productivity and creativity of the people working there. The words 'fun' and 'work' don't have to be antonyms, because we can always have fun while we work. And we do have a lot of fun here!

We have been extremely focused while getting this studio built from scratch. It is quite a challenge because you have to keep in mind all the possible setbacks that may arise while setting up a place. But sometimes it is necessary to decide what’s best for us and hence, we also had to reject a few ideas.


P.S. - We certainly have missed out a few things here and there (like we sometimes laugh about how a fully equipped office does not have a power outlet in the conference room). We plan to complete the final look and feel, by the end of the third quarter this year.


However, we have very well managed to put up a workspace that we already love.

From the lights to the furniture, from the reading space to the cafeteria; everything looks perfect. And we’re sure you can’t wait for the pictures of our new office to come.

Kudos to the team!
